Article

Developing a Model to Express Spatial Relationships on Omnidirectional Images for Indoor Space Representation to Provide Location-Based Services

by Alexis Richard C. Claridades 1,2, Misun Kim 1 and Jiyeong Lee 1,*

1 Department of Geoinformatics, University of Seoul, 163 Seoulsiripdae-ro, Dongdaemun-gu, Seoul 02504, Republic of Korea
2 Department of Geodetic Engineering, University of the Philippines Diliman, Quezon City 1101, Philippines
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2023, 12(3), 101; https://doi.org/10.3390/ijgi12030101
Submission received: 10 January 2023 / Revised: 22 February 2023 / Accepted: 26 February 2023 / Published: 1 March 2023

Abstract: The unavailability and fragmentation of spatial data are challenges in creating realistic representations of objects and environments in the real world, especially indoors. Among the numerous methods for representing indoor space, existing research has shown the efficiency and effectiveness of using omnidirectional images. However, these images lack information on spatial relationships, so spatial datasets such as the Node-Relation Structure (NRS) must be used to provide location-based services (LBS). This study proposes a method for embedding topological relationships in omnidirectional images, and correspondingly extracting NRS data, to enable the expression of these relationships on the images. These relationships include the connectivity relations among the indoor subunits and the containment relations between the spaces and the indoor facilities depicted in the image data. This model allows for the construction of an image-based indoor space representation for providing LBS. This paper also demonstrates an approach to utilizing these datasets through an image-based platform that enables spatial analysis relevant to LBS to be performed directly on the images, and that provides accurate visualization and expression of the spaces and of the indoor point-of-interest data representing indoor facilities. This paper also includes an experimental implementation to demonstrate the potential of our model for providing an efficient representation of space and for handling basic spatial queries in indoor space applications.

1. Introduction

Current geospatial applications vary according to their target fields and priorities, resulting in spatial datasets that are developed differently and with fragmented specifications [1]. Hence, studies must examine how these data can provide users with spatial services. Furthermore, applications demand various construction methods according to the real world's scale, the aspect of space, and the purpose of use [2,3], influencing which aspect of indoor space the data will represent. Often, the sources of data for producing indoor navigation routes are Building Information Models (BIMs) [4] or laser scan data [5]. However, BIM datasets are not always available, and 3D point clouds are time-consuming and expensive to collect [6].
Omnidirectional images, used mostly in street views [7], provide a 360-degree field of view at a certain point. Their relatively small data size and simple structure, together with their rich visual content, make them a preferred method of representing space in various studies [8]. While users of LBS applications need to perceive the visual aspect of indoor space accurately, identifying the spatial entities portrayed in the images is also necessary to provide services to users. To date, applications involving geotagging processes [9,10] and augmented reality [11,12] have presented approaches to identifying these spatial entities, which include the spaces and the objects contained within them. However, these methods are limited to labeling the image data to display textual information such as names and attributes.
In providing LBS to users, displaying these spatial entities and tagging the pixels with attributes is insufficient. For example, in indoor navigation applications, spatial analysis such as route calculation requires the connectivity information of the spaces that are visually portrayed in the images. Similarly, the management of indoor facilities involves identifying the objects within each space. Accordingly, image-based spatial applications must also portray information on spatial relationships, specifically topological relations, among the spaces and between the spaces and the objects inside them.
Hence, in developing applications using image datasets, it is necessary to supplement them with topology information to provide services, as shown in Figure 1. A method to recognize topological relations from the image pixels is necessary to enable spatial analysis using the images and the execution of LBS functions. This paper therefore aims to develop a data model that can express the spatial relationships of indoor space entities on omnidirectional images collected in indoor space. As such, this study proposes a method to embed such relationships from network-based topological data in the images to construct an image-based indoor space representation for providing indoor LBS. This work aims to demonstrate that the image-based expression of spatial relationships enables basic spatial functions that support the provision of services to users. As information on indoor objects is also vital for developing LBS applications, the proposed model represents not only the spatial relationships between the indoor spaces, but also the relationships between the spaces and their contained indoor facilities.
Previous studies that have utilized images in providing indoor LBS have used linking methods with separate network-based topological data, through reference datasets or coordinate thresholding methods. In contrast, this research aims to demonstrate that network-based topology data may be extracted from the topology-embedded image data to facilitate spatial analysis. This process allows for the development of an integrated environment in which the omnidirectional images are utilized alongside the topology data to enable an image-based analysis of indoor spaces. Additionally, this environment provides a method to visualize the indoor POIs representing the indoor features, and allows basic query capabilities through the omnidirectional images. This paper illustrates how to establish the positions of the indoor features on the omnidirectional images, without transformation to a global coordinate reference system (CRS), to enable their management within the image-based platform. To support the proposed concept, the researchers perform an experimental implementation on sample data to demonstrate this image-based, LBS-oriented environment as a holistic representation of space in a simple yet effective manner.
The paper is organized as follows. Section 2 reviews and analyzes the literature relevant to indoor space models and the use of omnidirectional images in spatial analysis. Section 3 presents the study's methodology, including the data requirements and the theoretical concepts utilized. Section 4 discusses the results of the implementation performed on the sample data, and the final section presents conclusions and future studies.

2. Related Studies

This section reviews studies on using a variety of datasets in indoor space representation and how this variety challenges researchers to develop an accurate and efficient portrayal of the spaces intended to provide services to users. This section also explores previous studies’ efforts to integrate differently sourced and formatted spatial data in developing LBS applications in indoor spaces.
As the real world becomes more complex with accelerating urbanization, the spatial datasets representing it are similarly challenged by inconsistencies and fragmentation [13]. The spaces that spatial datasets aim to represent also grow in complexity, demanding datasets that can handle the visualization and query analysis of their internal components [14]. Spatial datasets are essential, especially in indoor spaces, where applications for navigation and wayfinding are lacking compared to their outdoor counterparts [15].
Multiple datasets represent the same spaces in the physical world in spatial applications [2], especially indoors [3]. As spatial data generation inherently involves abstraction, each dataset can only focus on a few specific aspects of the real-world object it represents. In applications, spatial data are constructed from real-world spatial objects into various data formats and specifications according to the purpose of the data or the commercial software used. Hence, using only one of these formats for all necessary data is impossible, as no single format can wholly satisfy the visualization and analysis required in LBS applications. The difference in data types continually hinders this task, as each aspect of space requires specific abstraction methods and various data formats.
In other words, the same spatial object in the real world is constructed as various data, such as vector-based, raster-based, image-based, or attribute-based data, depending on the purpose, and in various data formats [2]. For example, while 3D laser scans are currently the trend for generating Building Information Models (BIMs), they present hurdles in construction cost, both economically and computationally [6]. Models from building floor plans in Computer-Aided Design (CAD) formats may be more accessible, but they lack dimensionality and thematic information, and are often outdated [16]. The trade-off between coordinate precision and cost, and between accessibility and being up to date, remains a problem in the generation of indoor spatial data, making it the bottleneck of the application development process. So, even though such data sources for representing indoor space provide precise positioning and visualization, they are labor-intensive and costly to produce [17,18].
The published literature has utilized omnidirectional images to bridge these trade-offs. Even though images are easier to collect than laser scans, they provide a similar, if not better, visual depiction of the actual condition of the space. They are also faster to collect, involve cheaper sensors, and require less personnel training. Studies have used these datasets for cost-efficiency in capture and processing [19,20]. In outdoor spaces, these images provide ground-level panoramic views, or street views. They serve as an alternative source of landscape information to satellite images [21], as a source of building façade textures [9], and as a basis for geolocation from skyline information [22].
In indoor spaces, algorithms have been presented to extract room layouts from these images [23,24], and publicly accessible web maps now include indoor 360-degree views [7,25,26]. The literature has also shown that map users can self-locate and orient themselves better in 3D maps than in 2D counterparts, in addition to showing enhanced spatial recall for decision-making [27]. Hence, projecting these images, which allow for a 3D view at a single point, provides excellent spatial context for the map user. However, the pixel data in the images still lack the information required to execute indoor spatial applications beyond visualization, as only color information is present.
Recent research has aimed to relay the object information visualized in image data to users of spatial applications. Studies have used geotagging techniques to relate location information to images for relaying textual information, such as names and attributes, to visualize locations in disaster situations [10] and to determine the locations of street views using aerial images [9]. Applications built on augmented reality platforms have presented methods to relay POI attributes [11] and to embed objects in virtual scenes for visualization using omnidirectional images [12]. While these approaches associate location information with images, they are limited to the visualization of such objects and their corresponding attributes. Hence, these methods are still incapable of handling the spatial analysis functions necessary for LBS.
Among the numerous forms of indoor datasets, topological data remain vital in supporting applications involving spatial queries [28,29], especially indoor navigation [30]. The Node-Relation Structure (NRS) is generated based on Poincaré duality, where a 3D object is abstracted as a node, and the respective spatial relationships (connectivity and adjacency) as edges [14], as shown in Figure 2. It is a network-based representation of the topological relationships of spaces. The NRS has been the basis of IndoorGML [31], the international spatial data standard of the Open Geospatial Consortium (OGC). Studies have shown that a network-based topological representation is more efficient for connectivity and path analysis than other forms [29]. Along with other indoor space data, it enables various analyses that enrich the services that applications provide to users by supplying information on spatial relationships [19].
In developing spatial services, it is insufficient to represent only the spaces. While the actual spaces host the occupants' activities, users are often more interested in what is inside those spaces. Indoor POIs provide a simple yet efficient way to describe these objects, and they are an essential element of LBS across various service goals [32]. Indoor POIs are essential in developing an LBS that encapsulates the indoor environment's complexity [33]. More than simple labels of features or events in indoor space, they also function as targets of agent navigation when integrated with network-based topological data [33]. Studies have demonstrated their use, along with omnidirectional images, as targets of spatial queries [19]. The published research has also shown that integration with network-based topological data for navigation applications is feasible [33]. This integration extends the Multi-Layer Space Model (MLSM) concept by defining a "within" relationship between an NRS node in one space layer and a set of nodes representing indoor POIs in a separate space layer.
Since images contain only pixel information, they do not directly convey the topological relationships of the spaces, such as connectivity. Such relationships, as expressed in the NRS, allow for identifying topological relations not just among the spaces, but also between the spaces and the objects within them. Studies have shown various methods to relay topological information from network-based data through the pixels to provide indoor spatial services through images. In the basic concept of representing spaces and facilities using NRS data, each real-world object, such as a space or a facility, is a node, while their respective spatial relationships are edges.
Jung and Lee (2017) proposed an extension of the IndoorGML core model to represent a sub-unit of indoor space through an omnidirectional image. The results of indoor spatial analysis are relayed to users on a display of the omnidirectional image, even though the application performs the actual spatial analysis on the underlying linked vector data, such as a building floor plan [19]. This method allows each node to be expressed through an image, allowing for the detection of spaces on the pixels. On the other hand, the concept of a Spatial Extended Point (SEP) by Ahn et al. (2020) proposes a more direct method that defines a 3D threshold around a node in the NRS. The algorithm formulates a relationship between the images and the network-based topological data through a cursor position-based search, eliminating the need for reference data [34].
In both studies, a separate dataset representing the relationships of the spaces is necessary for implementing LBS through the images: both approaches supplement the image data with external network-based topological data to offer users spatial services. While providing a method to represent indoor space using various data is an important target, reducing reliance on these methods of integrating external datasets is crucial. Not only will this reduce the costs of data production for application developers, but it will also limit the need for data conversion or data fusion steps that may become causes of data loss. This research proposes a method to provide an integrated indoor space environment, based on omnidirectional images, that is capable of providing spatial services by generating NRS data from the images and integrating indoor POI information.

3. Development of a Model to Express Spatial Relationships between Omnidirectional Images

This section describes the basis for implementing an image-based indoor space representation that provides location-based services directly to users. This research aims to generate not only the visualization of indoor space, but also the necessary spatial data that enable analysis and query, using mainly omnidirectional images. First, this section discusses a method of representing indoor space using omnidirectional images so as to embed information on spatial relationships, and illustrates how to determine the positions of indoor spatial entities from such images. Moreover, this section illustrates the establishment of a relationship between the image and the topological representation of space by extracting NRS data, and shows how to use this relationship to detect spatial entities on the images.
While the images are excellent for visually portraying the space, they cannot be used directly for spatial analysis because they do not directly express spatial relationships. To provide the spatial relationships necessary to enable spatial analysis on the images, the researchers use the NRS, a network-based topological data model. This model explicitly expresses two types of topological relations among spatial entities: first, the connectivity relationships among the spaces (such as rooms and hallways), and second, the containment relationships between those spaces and the indoor objects (i.e., facilities). The NRS expresses these relationships through a duality transformation, representing spaces as nodes and topological relationships as edges. Together, these primitives form a graph that expresses how spaces and features are related to each other.
Figure 3 illustrates the spatial entities that this study aims to portray using the image data. The right side shows the four rooms as space nodes connected by edges representing their respective connectivity relationships. Similarly, nodes also represent the facilities, connected by edges to the spaces containing them, representing a within-type relationship. A one-to-one correspondence between objects and NRS nodes is quickly understandable, but employing this concept on images, where a group of pixels, rather than a geometric object, depicts the indoor spaces or objects, is not a straightforward process. Hence, a method to express the relationships contained in the NRS data through the image data is necessary.
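To make this structure concrete, the following is a minimal sketch of the graph of Figure 3 in TypeScript. The type names, room identifiers, and sample facility are illustrative assumptions, not part of the proposed model or of IndoorGML.

```typescript
// A minimal sketch of the NRS as a graph; type names and identifiers are
// illustrative assumptions, not part of the proposed model or IndoorGML.
type NodeKind = "space" | "facility";
type EdgeKind = "connectivity" | "containment";

interface NrsNode { id: string; kind: NodeKind; }
interface NrsEdge { from: string; to: string; kind: EdgeKind; }
interface Nrs { nodes: NrsNode[]; edges: NrsEdge[]; }

// The four rooms of Figure 3 plus one contained facility (hypothetical IDs).
const nrs: Nrs = {
  nodes: [
    { id: "R1", kind: "space" }, { id: "R2", kind: "space" },
    { id: "R3", kind: "space" }, { id: "R4", kind: "space" },
    { id: "extinguisher-1", kind: "facility" },
  ],
  edges: [
    { from: "R1", to: "R2", kind: "connectivity" }, // rooms joined by a door
    { from: "R2", to: "R3", kind: "connectivity" },
    { from: "R3", to: "R4", kind: "connectivity" },
    { from: "R2", to: "extinguisher-1", kind: "containment" }, // "within"
  ],
};
```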

3.1. Representing Indoor Space Using Omnidirectional Images

This paper uses omnidirectional images to portray the spatial entities in indoor space. Compared to ordinary photos, these images are taken with a 360-degree horizontal field of view (FOV), covering a broader vantage point. These images are obtained parallel to the level plane, as this image orientation is more suitable for reconstructing spaces than oblique or nadir images [35]. This approach is also more flexible given the constraints on image capture in indoor space, because fewer images must be collected and processed to obtain a complete representation of the space.
Since an image is a snapshot of a particular location, a single omnidirectional image discretely expresses space at a singular point. For reconstructing indoor space, discrete images are insufficient for some areas, such as large or intricate structures, where the extent must be perceivable as completely as possible despite physical constraints and complexities. Hence, representing the continuity of indoor space using image data requires information on the connections between these images. In this case, portraying the entire area requires a set of omnidirectional images taken within the vicinity of each other. These connections must be determined based on the locations of image capture, referred to as shooting points [19,34].
Figure 4 illustrates how the connectivity relationships of spaces are embedded in constructing the omnidirectional image data. Consider a 3D space representing a building in Figure 4a. To identify how omnidirectional images can represent the indoor space, it must first be determined how the spaces may be divided into individual indoor sub-units. This method considers each of the rooms in the building as an individual subspace, which is the individual unit of space at the building floor level [36]. Moreover, this approach also considers the geometric characteristics of large spaces, such as the hallway R5, and applies indoor subspacing [31,37] to subdivide it into the subspaces R5a, R5b, R5c, and R5d in Figure 4b. The proposed method identifies these subspaces as the locations at which to capture the image data that represent them as omnidirectional images. Hence, an individual omnidirectional image is captured at the image shooting point corresponding to each subspace, such as in Figure 4c for the subspaces R10, R11a, and R11b.
The connection information of these omnidirectional images must be defined to represent the continuity of indoor space. This study represents a sub-unit of indoor space as a Scene, through an omnidirectional image displaying the view at that specific location. Hence, a group of interconnected scenes portrays the continuous indoor space. A Linkpoint defines the connection of one scene to another. This linkpoint is a pixel location within a scene corresponding to the shooting point where the image for the connected scene was captured. Thus, a scene contains at least one linkpoint, each defining the connection between two scenes.
Figure 5 illustrates the concept of embedding the connectivity relationships in the omnidirectional images based on the locations of the shooting points. This figure shows how the scenes are structured by defining the linkpoints, reflecting the subspaces and the images shown in Figure 4c. Scene R10 contains Linkpoint R10-1, which connects to Scene R11b. Correspondingly, Scene R11b has Linkpoint R11b-2, which links back to Scene R10, and Linkpoint R11b-1, which connects to Scene R11a. Then, Scene R11a has Linkpoint R11a-1, which connects back to Scene R11b.
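As a sketch of the scene-linkpoint structure just described, the connections of Figure 5 might be encoded as follows. The field names, file names, and angle values are assumptions, anticipating the angular coordinates defined in Section 3.2.

```typescript
// Sketch of the scene/linkpoint structure; field names, file names, and
// angle values are assumptions. h and v anticipate the angular coordinates
// of Section 3.2 (degrees within the scene's sphere).
interface Linkpoint {
  id: string;
  h: number;           // horizontal angle of the pixel within the scene
  v: number;           // vertical angle of the pixel within the scene
  targetScene: string; // scene whose shooting point this pixel depicts
}

interface Scene {
  id: string;
  imageUrl: string;    // equirectangular omnidirectional image
  linkpoints: Linkpoint[];
}

// The connections of Figure 5 (angles are placeholders).
const scenes: Scene[] = [
  { id: "R10", imageUrl: "r10.jpg",
    linkpoints: [{ id: "R10-1", h: 90, v: -5, targetScene: "R11b" }] },
  { id: "R11b", imageUrl: "r11b.jpg",
    linkpoints: [{ id: "R11b-2", h: 270, v: -5, targetScene: "R10" },
                 { id: "R11b-1", h: 0, v: -5, targetScene: "R11a" }] },
  { id: "R11a", imageUrl: "r11a.jpg",
    linkpoints: [{ id: "R11a-1", h: 180, v: -5, targetScene: "R11b" }] },
];
```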

3.2. Defining Positions of Spatial Entities on the Image Data

Within a scene, a linkpoint signifies the location of the shooting point of a separate scene. The researchers construct these scenes from omnidirectional images with an equirectangular projection that produces a spherical view. This section defines a local coordinate system based on the omnidirectional image, treated as a unit sphere with a given radius, to calculate the coordinates of the features (i.e., pixels) within the scene. So, rather than pixel coordinates based on a 2D cartesian system, the pixels have polar coordinates: the vertical angle (v) from the horizontal plane, and the horizontal angle (h) from a fixed meridian (e.g., north). Hence, as shown in Figure 6, the position of Linkpoint R11a-1 within Scene R11a is the pixel location in the direction (h, v) of the shooting point for Scene R11b.
Since these polar coordinates are local coordinates (i.e., angular coordinates within a scene's unit sphere), two spatial entities may have the exact same polar coordinates despite being different features within separate scenes. This ambiguity may cause problems in uniquely identifying spaces or indoor features. Hence, unique coordinates for these features must be calculated based on information from their containing scenes. To overcome this, a pixel's estimated 3D cartesian coordinates (X, Y, Z) must be obtained: using the 3D cartesian coordinates of the scene's shooting point, the polar coordinates of the pixel may be converted to 3D cartesian coordinates, as Figure 6 also illustrates. This approach defines the omnidirectional image as a unit sphere, so an appropriate radius value can be defined based on the space it represents; the size of this sphere depends on the size of the subspace that the scene represents. For example, most features are located along the walls (lateral direction) rather than along the direction of the hallway, so the radius is dictated by how far the shooting point is from the wall, and a scene in a hallway may have a radius equal to half of the hallway's width [34]. For rooms, the room dimensions dictate this radius value if the whole room is considered a single subspace based on the process shown in Figure 4. From this radius and the 3D cartesian coordinates of the shooting point, unique 3D cartesian coordinates of each indoor POI can be calculated.
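The sketch below illustrates one plausible form of this conversion, under the assumption that h is measured clockwise from north and v upward from the horizontal plane; the exact axis conventions would depend on the implementation.

```typescript
// Sketch of the conversion in Figure 6: a pixel's polar coordinates within
// a scene to approximate 3D cartesian coordinates. Assumes h is measured
// clockwise from north (the +Y axis) and v upward from the horizontal plane.
interface Point3D { x: number; y: number; z: number; }

function toCartesian(
  shootingPoint: Point3D, // 3D cartesian position of the scene's shooting point
  radius: number,         // unit-sphere radius chosen for this subspace
  hDeg: number,           // horizontal angle of the pixel (degrees)
  vDeg: number            // vertical angle of the pixel (degrees)
): Point3D {
  const h = (hDeg * Math.PI) / 180;
  const v = (vDeg * Math.PI) / 180;
  return {
    x: shootingPoint.x + radius * Math.cos(v) * Math.sin(h),
    y: shootingPoint.y + radius * Math.cos(v) * Math.cos(h),
    z: shootingPoint.z + radius * Math.sin(v),
  };
}

// Example: a feature 30° west of north, slightly below eye level, in a
// hallway scene whose radius is half the hallway width (here, 1.5 m).
const featurePosition = toCartesian({ x: 10, y: 4, z: 1.6 }, 1.5, 330, -10);
```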
Apart from the connectivity relations embedded in the image data, a representation of indoor space for LBS applications also requires a representation of the features that are present within indoor spaces. This representation also includes portraying the topological relationships between these features and their containing spaces. In the NRS, the features are also represented as nodes, having a containment relationship with the node representing the space that contains them. This model also expresses this containment relationship as an edge. Hence, this paper represents the indoor features expressed on the images as indoor POIs, which are point primitives that are consistent with the NRS. Using point primitives allows for a more straightforward and compatible process to combine with the network-based topological data, compared to alternatives such as 3D geometric objects [33].
In contrast to outdoor POIs, which are uniquely identifiable (i.e., as a unique ID) through their names (e.g., "GNB bank", "Namsan Tower", etc.), indoor POIs may have only generic names, as in the case of facilities (e.g., "fire extinguisher", "bulletin board", etc.). Hence, a location is necessary to uniquely identify an indoor POI together with its name. This idea is consistent with the basic definitions of a POI from the World Wide Web Consortium (W3C) [38] and the OGC [39]. Hence, this model must similarly define local coordinates to obtain unique positions for these features within the images.
This research denotes indoor POIs as point locations on the image pixels, which are implemented as point features within the omnidirectional image, as shown in Figure 7. These locations are associated with the scene that contains them, and each represents a position within the image for the feature it denotes. Hence, within a scene, only the indoor POIs visible within that scene are identifiable. Calculating these features' local coordinates is similar to determining the local positions of the linkpoints within a scene. Hence, the coordinates of the POI for the poster in Scene R11b may be calculated based on the coordinates of the shooting point for that scene.
Since the omnidirectional image approximates the space that it represents through a sphere, the corresponding 3D coordinates of the features calculated from the above method are also approximate. There might be discrepancies with the actual coordinates of the objects (i.e., when measured using positioning devices or captured from precise indoor geometric datasets such as BIM or LiDAR). However, in this paper, these approximate coordinates are sufficient for our purpose, since these positions are only necessary to identify the presence of these indoor POIs within the image-based (i.e., scene) environment.

3.3. Generating NRS Data from the Topology-Embedded Image Data

The previous subsections discussed the representation of 3D indoor spatial entities by embedding the corresponding topological relationships in omnidirectional images. Similarly, NRS data define a dual graph composed of nodes having a 1:1 relationship with each spatial entity, and edges for the same topological relationships. As Figure 8 illustrates, these datasets represent the same space, despite being composed of pixels and discrete primitives, respectively. To overcome the limitations of expressing a direct correspondence between images and the NRS shown in Figure 3, this section discusses establishing an explicit relationship between these datasets, which enables them to be used together for providing LBS.
This research establishes the relationship between these representations of the same indoor space by extracting the NRS data from the topology-embedded image datasets. Each scene corresponds to a node representing the space, describing the location of a shooting point at an indoor subspace. The linkpoints represent the logical connections between the scenes and abstract the network's edges or links, and these nodes and edges complete the logical NRS representing the indoor space. The omnidirectional image data and the derived NRS data comprise the essential parts of the indoor LBS platform, containing information on the relationships among the spaces depicted in each image.
This logical NRS is useful in network-based applications that require only topological relations, such as connectivity analysis [14]. Such an NRS can be transformed into a geometric NRS regardless of whether the omnidirectional images are georeferenced. As in the previous section, the coordinates of the shooting points are available from the image capture process. Given these coordinates, the approximate length of each edge can be calculated, and the graph can contain this information to become a geometric NRS that provides geometric information for specific analyses such as optimal pathfinding, allocation, or spatial information analysis. The directional information for each edge can also be calculated from these coordinates to reduce ambiguity in performing queries, which may be necessary for certain applications; for example, in certain route analysis cases, some edges may represent a space that only allows certain directions of travel, or an application may consider user preferences. However, managing indoor facilities is also a necessary function of LBS. Hence, the image data for these services must also be able to portray the spatial relationships between the spaces and the features to conduct spatial analysis, and the NRS generation process must also include the POIs.
As the process discussed in Section 3.1 embeds the images with information on their connections through the linkpoints, the NRS dataset representing the connectivity relationships of the indoor subunits that these images represent can be derived. Figure 9 illustrates the process of extracting the NRS data from the topology-embedded image data. This process converts each scene representing a subunit of indoor space to a node in the NRS. It then determines whether a scene contains a linkpoint, and creates a connectivity node-edge relation for every scene and linkpoint it contains. A weight value may be designated for each linkpoint to serve as an impedance when calculating routes, such as a distance value for that edge. If the scene contains a POI, the algorithm creates a node-edge relation representing containment. Each corresponding node's position is also collected based on the method presented in the previous section.
Taking the example of the scenes illustrated in Figure 5 and the POI shown in Figure 7, Figure 8 also shows the resulting NRS data generated from the connections of the scenes in the image data. The nodes n10, n11a, and n11b represent Scenes R10, R11a, and R11b, respectively. The figure shows the corresponding connectivity relations based on the linkpoints in the scenes as red edges. Furthermore, the POI representing the poster inside Scene R11b is also present in the NRS data as a node, with an edge denoting its containment relationship to the space denoted by node n11b.
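A minimal sketch of this extraction, reusing the Scene and NRS types from the sketches in the preceding sections, might look as follows; the weight handling is indicated only as a comment.

```typescript
// Sketch of the extraction in Figure 9, reusing the Scene/Linkpoint and NRS
// types from the earlier sketches; names remain illustrative assumptions.
function extractNrs(scenes: Scene[], poisByScene: Map<string, string[]>): Nrs {
  const nodes: NrsNode[] = [];
  const edges: NrsEdge[] = [];
  for (const scene of scenes) {
    nodes.push({ id: scene.id, kind: "space" }); // one node per scene
    for (const lp of scene.linkpoints) {
      // one connectivity edge per linkpoint; a weight (e.g., the distance
      // between shooting points) could be attached here as a route impedance
      edges.push({ from: scene.id, to: lp.targetScene, kind: "connectivity" });
    }
    for (const poiId of poisByScene.get(scene.id) ?? []) {
      nodes.push({ id: poiId, kind: "facility" });
      edges.push({ from: scene.id, to: poiId, kind: "containment" });
    }
  }
  return { nodes, edges };
}

// For the scenes of Figure 5 and the poster POI of Figure 7, this yields
// the nodes n10, n11a, and n11b with their connectivity edges, plus a
// containment edge from n11b to the poster node.
const graph = extractNrs(scenes, new Map([["R11b", ["poster-1"]]]));
```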

3.4. Identifying Spatial Entities on the Topology-Embedded Images

This section discusses the identification of spatial entities on the topology-embedded omnidirectional images. To do so, a method is necessary to relate a user position input, such as a mouse click on an image pixel, with the extracted NRS data. User input on the image data, a mouse click, initializes the detection of a spatial entity in the scene. Using the concept proposed by Ahn et al. (2020) [34], the SEP matrix between the user click and an NRS node is calculated based on their positions.
In the previous implementation of this concept [34], the SEP matrices for all nodes were calculated every time a single user click was made. To streamline this process for a more efficient calculation, the algorithm narrows the candidates down to the nodes representing the spaces connected to, and the features contained in, the space represented by the currently displayed scene. Then, the SEP region for each candidate is established, and the user click position is compared against it through the SEP matrix to determine which space the click identifies. Moreover, in the previous implementation, space or feature identification using this threshold produced only one identified result, regardless of whether multiple SEP regions contained the user click. To fully resolve the conflict between the imprecision of clicks on a computer screen and the exactness of calculating SEP values, the proposed algorithm introduces a selection step for multiple outcomes, i.e., when more than one SEP region contains the user click. Hence, a user click successfully identifies a spatial entity in the scene once a single SEP region containing that click is determined.
Detecting a space within an omnidirectional image essentially identifies a scene connected to the present scene at a selected position. This functionality is helpful for manually navigating from scene to scene when the detected scene is displayed. On the other hand, detecting the presence of an indoor POI in a scene using this process is synonymous with identifying an object within that space, and LBS applications may use this to display the attributes of an object within that scene. If the algorithm detects multiple spaces or features within the region, a selection dialog opens for the user to choose based on the vicinity of the click position. If the click does not detect any spatial entity, a warning message to click elsewhere is displayed, and the user is prompted to repeat their mouse input. Figure 10 summarizes the implementation of this process in detail.
As in Ahn et al. (2020), Figure 11 shows how the SEP method defines containment regions for each node, representing its location's zone of influence, which accounts for the uncertainty of the user's selected position in the image [34]. If a user intends to click on the doorknob in the image to signify their intent to open the door and enter the room, the SEP method resolves the imprecision of this click, which might not fall directly on the exact pixel location of the doorknob. The figure also shows the calculation of the SEP matrix, which quantifies the relationship between this region and the user's cursor position and determines the presence of a spatial entity at that click. The user's click position C0 is compared to the boundary of the SEP region in the XY, YZ, and XZ planes to determine whether it is contained in the region.
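A sketch of this containment test follows, under the simplifying assumption that each SEP region is an axis-aligned box around the node position; the full SEP matrix calculation is given in Ahn et al. (2020) [34].

```typescript
// Sketch of the SEP containment test, assuming the SEP region is an
// axis-aligned 3D box of half-sizes (dx, dy, dz) around a node position;
// the exact SEP matrix formulation follows Ahn et al. (2020) [34].
interface Point3D { x: number; y: number; z: number; } // as in Section 3.2
interface SepRegion { center: Point3D; dx: number; dy: number; dz: number; }

function containsClick(region: SepRegion, c0: Point3D): boolean {
  // Comparing C0 to the region's boundary in the XY, YZ, and XZ planes
  // reduces, for a box-shaped region, to per-axis comparisons.
  return (
    Math.abs(c0.x - region.center.x) <= region.dx &&
    Math.abs(c0.y - region.center.y) <= region.dy &&
    Math.abs(c0.z - region.center.z) <= region.dz
  );
}

// Candidates are only the nodes connected to or contained in the current
// scene; more than one hit triggers the selection dialog described above.
function detect(candidates: Map<string, SepRegion>, c0: Point3D): string[] {
  const hits: string[] = [];
  for (const [nodeId, region] of candidates) {
    if (containsClick(region, c0)) hits.push(nodeId);
  }
  return hits; // 0 hits: warn the user; 1: identified; >1: user selects
}
```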

3.5. Implementing Navigation on the Topology-Embedded Images

As shown in the previous section, spaces in the images can be detected using a user's click. Hence, within a scene displayed on the user's screen, this detection may trigger the visualization of the detected space through its corresponding scene. This action enables the visualization of navigation between the spaces represented by each scene within the image data. As the omnidirectional images provide an eye-level perspective, this provides navigation visualization as if the user were moving continuously along the indoor space, through continuous clicks that detect spaces in the images. However, automating this visualization of navigation along the spaces without continuous clicking requires the calculation of a route between desired source and destination locations, based on network topology data.
The NRS data extracted from the image data enable the implementation of routing-based queries to determine optimal paths, so the topology-embedded image data also allow for the automation of navigation visualization between locations within the indoor space. This method obtains the user's desired start and destination, which may be either scene names or POI names, as input. If the user inputs a POI name, the algorithm first searches for the scene containing that POI and assigns that scene as either the start or the destination of the routing. The algorithm then determines the nodes corresponding to these two scenes, and calculates the shortest path between them using Dijkstra's routing algorithm [40]. The resulting route is a node sequence corresponding to the optimal path from the desired origin to the destination scene. This process, shown in Figure 12, uses the sequence of scenes and corresponding rotation angles calculated from the positions of the linkpoints to display the images sequentially and animatedly, simulating automated navigation within the indoor space as if a user were walking along the scenes, since the projections of the photos match the user's visual perspective.
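As a sketch of this routing step, a textbook implementation of Dijkstra's algorithm over the extracted, weighted NRS edges might look as follows; it returns the node (scene) sequence used by the visualization, and it is not the authors' exact implementation.

```typescript
// Sketch of the routing step on the extracted NRS: a textbook Dijkstra over
// weighted directed edges (weights assumed to be distances between shooting
// points), not the authors' exact implementation.
interface WeightedEdge { from: string; to: string; weight: number; }

function shortestPath(edges: WeightedEdge[], start: string, goal: string): string[] {
  const dist = new Map<string, number>([[start, 0]]);
  const prev = new Map<string, string>();
  const visited = new Set<string>();
  while (true) {
    // pick the unvisited node with the smallest tentative distance
    let u: string | undefined;
    for (const [node, d] of dist) {
      if (!visited.has(node) && (u === undefined || d < dist.get(u)!)) u = node;
    }
    if (u === undefined || u === goal) break;
    visited.add(u);
    for (const e of edges) {
      if (e.from !== u) continue;
      const alt = dist.get(u)! + e.weight;
      if (alt < (dist.get(e.to) ?? Infinity)) {
        dist.set(e.to, alt);
        prev.set(e.to, u);
      }
    }
  }
  // backtrack from the goal to recover the scene (node) sequence
  const path: string[] = [];
  for (let n: string | undefined = goal; n !== undefined; n = prev.get(n)) {
    path.unshift(n);
  }
  return path[0] === start ? path : []; // empty if the goal is unreachable
}
```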

4. Experimental Implementation

This section demonstrates the potential use of our proposed concept of omnidirectional image-based representation of indoor space and spatial relationships by discussing an experimental implementation on a sample dataset. In this implementation, the researchers construct a platform composed of the scenes built from omnidirectional images collected inside and in the vicinity of a building, including several indoor POIs, to demonstrate the implementation of indoor space analysis within an image-based environment.

4.1. Study Area and Experimental Datasets

The test site is an academic building on a university campus, as shown in Figure 13. Based on a 1:5000 scale topographic map and building floor plans, the researchers identified 45 shooting points where omnidirectional images were collected: one in an elevator space, three in rooms, two near the exits, and 39 in the hallway. The same figure also shows the set of images taken on the second floor. The photos were collected following the process discussed by Jung and Lee (2017) and Ahn et al. (2020) [19,34], where six images are taken at each point using the camera. For this implementation, the researchers used a DSLR camera with a fisheye lens mounted on an automatic rotator to collect the individual wide-angle images at each shooting point. The images were stitched in PTGui v12.11 to create the 360° horizontal field of view of each scene for the omnidirectional images.
Based on the positions of the shooting points along the indoor space, the researchers used the topology-authoring software Panotour v2.5.14 to establish the connections of the scenes through linkpoints. The connected images are exported as an HTML file with accompanying XML files containing image metadata, along with basic KRPano scripts in JavaScript that enable the embedding of the omnidirectional images as scenes on a browser-based platform. The XML file contains scene references, including the attributes of each scene, such as its name, coordinates, and viewing properties, e.g., the default horizontal and vertical directions upon loading and the field of view. This file also contains information on the linkpoints established in each scene; the linkpoints section comprises the position attributes (polar coordinates within the scene) and the information on which scene is linked through that linkpoint.
To efficiently utilize the parameters in the image index XML file, the researchers use an iterative parsing algorithm developed in JavaScript to populate a database containing the image information. Figure 14 illustrates the suggested structure of this parameter database. The algorithm stores the parsed data in three main tables. First, the Scene table contains the image parameters of all scenes, including all connected linkpoints. Similarly, the Linkpoints table contains data on the connection information of the scenes and the features; while this table holds attributes similar to those of the Scene table, having it makes the SEP calculations convenient. The Feature table is a subset of the Linkpoints table, containing only the linkpoint information of the indoor features. The parameters in this database are the basis for extracting the NRS data from the topology-embedded omnidirectional images.
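A sketch of such a parsing step is shown below; it assumes browser-side parsing and element and attribute names modeled on typical KRPano tour XML (scene, view, and hotspot with ath/atv and linkedscene), which may differ from the exact files Panotour produces.

```typescript
// Sketch of parsing a KRPano-style tour XML into the Scene and Linkpoints
// tables of Figure 14. Element and attribute names are assumptions modeled
// on typical KRPano output, not the authors' exact schema.
interface SceneRow { name: string; hlookat: number; vlookat: number; }
interface LinkpointRow { scene: string; ath: number; atv: number; target: string; }

function parseTour(xmlText: string) {
  const doc = new DOMParser().parseFromString(xmlText, "application/xml");
  const sceneTable: SceneRow[] = [];
  const linkpointTable: LinkpointRow[] = [];
  for (const scene of Array.from(doc.querySelectorAll("scene"))) {
    const name = scene.getAttribute("name") ?? "";
    const view = scene.querySelector("view");
    sceneTable.push({
      name,
      hlookat: Number(view?.getAttribute("hlookat") ?? 0), // default horizontal view
      vlookat: Number(view?.getAttribute("vlookat") ?? 0), // default vertical view
    });
    // hotspots carry the linkpoint positions (polar coordinates within the scene)
    for (const hs of Array.from(scene.querySelectorAll("hotspot"))) {
      linkpointTable.push({
        scene: name,
        ath: Number(hs.getAttribute("ath") ?? 0),
        atv: Number(hs.getAttribute("atv") ?? 0),
        target: hs.getAttribute("linkedscene") ?? "",
      });
    }
  }
  // The Feature table would be the subset of linkpointTable rows that point
  // to indoor POIs rather than to connected scenes.
  return { sceneTable, linkpointTable };
}
```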
The image capture step is the most labor-intensive and time-consuming of the data construction processes discussed in this section. This labor effort is primarily due to the need to capture and stitch multiple images per shooting point, despite having a wide-angle lens, to cover the 360° horizontal field of view. While taking the pictures, unexpected obstructions (e.g., passersby or lighting problems) may also occur and cause problems in the stitching step. However, commercially available 360° camera models have recently become more accessible and would significantly reduce the labor and time required to generate data similar to those in this study.

4.2. The Detection of Indoor Spaces and Features on the Image-Based Representation

Figure 15 and Figure 16 demonstrate feature and space detection in the image. The algorithm calculates the SEP regions of the linkpoints within an image as soon as the user either double-clicks or presses the mouse beyond a certain time threshold (i.e., a long click). In this implementation, a double click initiates the detection of an indoor space, while a long click initiates the detection of POIs. The method compares the click position to the regions to determine which region contains the click, and produces results depending on the mouse action. Figure 15 shows a long click within the region of a linkpoint, initiating a message containing the feature's POI information.
Figure 16 shows the process of identifying a space on the image through a double-clicking action. In a scene depicting the space within an elevator, the connected scenes are accessible through linkpoints placed on the elevator buttons. Since these buttons are close to each other, the SEP regions overlap, and uncertainties in the position of the user click introduce fuzziness into the space detection. Hence, if users select a pixel where the SEP regions overlap, they must choose either the "Level2" or the "Level6" space. The platform displays the appropriate scene depending on the user's choice.

4.3. Implementing Automated Navigation in the Indoor Image-Based Environment

The automated scene-to-scene navigation described in Section 3 is initiated by prompting the user for the desired start and destination scenes through direct input in the HTML interface, as shown in Figure 17. The platform also prompts the user to select the basis of the calculation: weighted or unweighted NRS data, for which this implementation runs Dijkstra's optimal path algorithm with weights calculated from the distances between shooting points, or with equal weights along the edges, respectively. Our algorithm then extracts the corresponding scene of each node in the calculated route, along with the scene display parameters, from the reference database.
Suppose a user enters the texts "2nd Floor Lobby" and "Room 607" as the origin and destination points of their desired route. Figure 17 illustrates the sequence of scenes resulting from the automated navigation process based on these inputs, starting from the origin on the second floor and proceeding towards the sixth floor. Figure 17a shows the images displayed in the platform while the automated navigation is ongoing. Figure 17b shows the route along a portion of the building network, and Figure 17c shows a detailed visualization of this route, with the view direction for each scene indicated by small arrows above the image labels.
From the starting point at the second-floor lobby, with its corresponding scene shown in Figure 17(a1), the navigation guides the user towards the following location shown through the next scene in Figure 17(a2). Along this movement, the animations include zooming toward the next scene's direction and transition effects for more realistic visualization. At the same point, the scene rotates to show Figure 17(a3) to orient the user towards the direction of the elevator. The rotation angle used to pivot the scene from view (a2) to (a3) is calculated from the Scene array parameters by subtracting the default horizontal view angle of the scene in (a2) from the horizontal position of the linkpoint representing the next scene (the next node in the calculated route) at the elevator; a sketch of this calculation is given after the following paragraph.
The navigation continues to Figure 17(a4) inside the elevator. The scene rotates similarly to show the view in Figure 17(a5), towards the linkpoint pointing to the elevator button for the sixth floor. Then, once the elevator opens on the sixth floor, the view shown in Figure 17(a6) rotates towards the direction of the hallway in Figure 17(a7). The navigation continues toward Figure 17(a8) along the corridor, rotates at Figure 17(a9), moves to Figure 17(a10) in front of the target room, and turns in the direction of Room 607's door at Figure 17(a11). Finally, the platform displays the view inside Room 607 as the final destination of the visualization at Figure 17(a12).
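A small sketch of this rotation calculation, assuming the parameter names used in the Section 4.1 sketch, is given below.

```typescript
// Sketch of the rotation step: the angle to pivot the current scene toward
// the next node in the route, assuming the parameter names of the Section
// 4.1 sketch (hlookat = default horizontal view angle of the scene,
// ath = horizontal position of the linkpoint to the next scene).
function rotationAngle(sceneDefaultH: number, linkpointH: number): number {
  let angle = linkpointH - sceneDefaultH;
  // normalize to (-180, 180] so the scene takes the shortest turn
  while (angle > 180) angle -= 360;
  while (angle <= -180) angle += 360;
  return angle;
}

// e.g., a scene loaded facing 90° whose next linkpoint sits at 330°
// pivots -120° (a left turn) rather than +240°.
const pivot = rotationAngle(90, 330); // -120
```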
This experimental implementation illustrates the integration of distinct spatial datasets representing different aspects of space through the proposed model for embedding topological data in omnidirectional images for providing indoor LBS. This implementation uses omnidirectional images to generate interconnected scenes, building an image-based representation that portrays the continuity of indoor space. From the image data, the researchers generate NRS data representing the connectivity relations of the indoor subspaces. This work also proposes a process to enable the identification of the spaces through a method relating pixel data and position information. Objects within these spaces, represented by indoor POI data, may also be detected on the image pixels through the same process. The generated NRS data also allow for the calculation of routes within the spaces, enabling the visualization of automatic movement along the calculated path and producing a realistic visualization from the perspective provided by the omnidirectional images.

5. Conclusions and Recommendations

The fragmentation of spatial datasets in structure and information content remains an obstacle to generating accurate digital representations of spaces. As various types of data represent the respective characteristics of indoor space, establishing LBS applications requires integration to provide end-users with more insightful knowledge. This study presents a model for expressing the spatial relationships of spatial entities, including indoor spaces and features, on omnidirectional images. This paper uses this model to provide an image-based reconstruction of indoor space that is capable of handling spatial analysis, and proposes a method for extracting NRS data from the topology-embedded images. These data express the connectivity relationships between the indoor spaces, and the containment relationships between the indoor facilities and their respective indoor spaces.
This paper also presents a method for identifying the spatial entities on the topology-embedded image data, and an algorithm for automating the visualization of navigation along these images. The researchers performed an experimental implementation of the integration of omnidirectional images, NRS data, and indoor POI data on sample data for a university campus building, through a web browser-based application. This implementation also presents an integrated platform that enables a realistic visualization and analysis of the spaces portrayed in the images. This paper also demonstrates the facilitation of route-based queries within the image-based environment, and that it is feasible to conduct the query of indoor POI data using position data based on local coordinates defined in the omnidirectional images.
The proposed integration method contributes significantly to the scientific literature, because applications still face barriers to using differently formatted spatial datasets together. Existing systems commonly use data conversion to remedy this concern, but the conversion processes often risk information loss. Compared to previous studies that utilize images to reconstruct indoor spaces, this research demonstrates that it is possible to provide spatial services without time-consuming and costly processing for generating geometric data, such as point clouds, since the proposed method performs such services directly on the images. While previous studies have performed such image-based topological analysis, network-based topology data such as the NRS remain a separate input to be integrated with the images. In contrast, this paper proposes a method to generate the NRS data from the topology-embedded images so that the two can be used together to provide spatial services.
Nonetheless, this study has several limitations that future studies must tackle. First, while local coordinates based on the spherical polar coordinates of the omnidirectional images allow for consistent positioning for the indoor POI data and spatial queries, they pose difficulties when using similar data from outside this image-based environment. A consistent georeferencing method for the omnidirectional images that provides 3D positions based on a global CRS may solve this problem, as may a way to transform the local coordinates (i.e., of the image) to global coordinates referred to a global CRS (i.e., of the indoor POI database). Such referencing will enable the importing of indoor POI data from an external database, or the exporting of the same for use in other applications. Second, one of the main motivations for using omnidirectional image data is their efficiency and low-cost collection; still, image collection forms the bottleneck in generating the model, taking the most labor effort and time to produce. Hence, a systematic method for updating the image data within the system is necessary to keep up with the fast-paced changes in the physical world, and to update the digital counterpart accordingly. Finally, the geometric NRS generated from the image data might include more information apart from distances, such as direction, capacity, and other similar attributes that might be used as impedance variables in route analysis.
Moreover, a study on developing a data model for image datasets intended for image-based LBS applications using NRS data is part of our future work. While the researchers constructed the image database illustrated in Figure 14 for the efficient storage and retrieval of image parameters, this model is not intended to be a standard for how applications must establish image datasets for LBS purposes. Finally, future applications may implement additional 3D query functions that apply to network-based topological data and indoor POIs.

Author Contributions

Conceptualization, Jiyeong Lee and Alexis Richard C. Claridades; methodology, Jiyeong Lee and Alexis Richard C. Claridades; software, Alexis Richard C. Claridades and Misun Kim; validation, Jiyeong Lee, Alexis Richard C. Claridades and Misun Kim; formal analysis, Jiyeong Lee, Alexis Richard C. Claridades and Misun Kim; investigation, Alexis Richard C. Claridades and Misun Kim; resources, Jiyeong Lee and Alexis Richard C. Claridades; data curation, Jiyeong Lee, Alexis Richard C. Claridades and Misun Kim; writing—original draft preparation, Alexis Richard C. Claridades; writing—review and editing, Alexis Richard C. Claridades, Misun Kim and Jiyeong Lee; visualization, Alexis Richard C. Claridades, Misun Kim and Jiyeong Lee; supervision, Jiyeong Lee; project administration, Jiyeong Lee; funding acquisition, Jiyeong Lee. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1013951).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, Y.; Ai, H.; Deng, Z.; Gao, W.; Shang, J. An Overview of Indoor Positioning and Mapping Technology Standards. Standards 2022, 2, 157–183.
  2. Ahn, D.; Park, J.; Lee, J. Defining Geospatial Data Fusion Methods Based on Topological Relationships. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2018, 42, 317–319.
  3. Chen, J.; Clarke, K.C. Indoor Cartography. Cart. Geogr. Inf. Sci. 2020, 47, 95–109.
  4. Liu, L.; Li, B.; Zlatanova, S.; van Oosterom, P. Indoor Navigation Supported by the Industry Foundation Classes (IFC): A Survey. Autom. Constr. 2021, 121, 103436.
  5. Gharebaghi, A.; Abolfazl Mostafavi, M.; Larouche, C.; Esmaeili, K.; Genon, M. Precise Indoor Localization and Mapping Using Mobile Laser Scanners: A Scoping Review. Geomatica 2022, 75, 1–13.
  6. Houshiar, H.; Winkler, S. POINTO—A Low Cost Solution to Point Cloud Processing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W8, 111–117.
  7. Google LLC. Google Street View. Available online: https://www.google.com/streetview/understand/ (accessed on 3 August 2022).
  8. Maugey, T.; le Meur, O.; Liu, Z. Saliency-Based Navigation in Omnidirectional Image. In Proceedings of the 2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP), Luton, UK, 16–18 October 2017; pp. 1–6.
  9. Bansal, M.; Sawhney, H.S.; Cheng, H.; Daniilidis, K. Geo-Localization of Street Views with Aerial Image Databases. In Proceedings of the 19th ACM International Conference on Multimedia, Scottsdale, AZ, USA, 28 November–1 December 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 1125–1128.
  10. Ham, Y.; Kim, J. Participatory Sensing and Digital Twin City: Updating Virtual City Models for Enhanced Risk-Informed Decision-Making. J. Manag. Eng. 2020, 36, 04020005.
  11. Ruta, M.; Scioscia, F.; Ieva, S.; de Filippis, D.; di Sciascio, E. Indoor/Outdoor Mobile Navigation via Knowledge-Based POI Discovery in Augmented Reality. In Proceedings of the 2015 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT 2015), Singapore, 6–9 December 2015; IEEE: Singapore, 2016; Volume 3, pp. 26–30.
  12. Chheang, V.; Jeong, S.; Lee, G.; Ha, J.S.; Yoo, K.H. Natural Embedding of Live Actors and Entities into 360° Virtual Reality Scenes. J. Supercomput. 2020, 76, 5655–5677.
  13. Claridades, A.R.; Lee, J. Defining a Model for Integrating Indoor and Outdoor Network Data to Support Seamless Navigation Applications. ISPRS Int. J. Geo-Inf. 2021, 10, 565.
  14. Lee, J.; Kwan, M.P. A Combinatorial Data Model for Representing Topological Relations among 3D Geographical Features in Micro-Spatial Environment. Int. J. Geogr. Inf. Sci. 2005, 19, 1039–1056.
  15. Dong, W.; Qin, T.; Liao, H.; Liu, Y.; Liu, J. Comparing the Roles of Landmark Visual Salience and Semantic Salience in Visual Guidance during Indoor Wayfinding. Cart. Geogr. Inf. Sci. 2020, 47, 229–243.
  16. Shen, G.; Chen, Z.; Zhang, P.; Moscibroda, T.; Zhang, Y. Walkie-Markie: Indoor Pathway Mapping Made Easy. In Proceedings of the 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI 2013), Lombard, IL, USA, 2–5 April 2013; pp. 85–98.
  17. Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data. Autom. Constr. 2013, 31, 325–337.
  18. Pintore, G.; Gobbetti, E.; Scopigno, R. Mobile Reconstruction and Exploration of Indoor Structures Exploiting Omnidirectional Images. In Proceedings of the SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications, New York, NY, USA, 5–8 November 2016; ACM: New York, NY, USA, 2016.
  19. Jung, H.; Lee, J. Development of an Omnidirectional-Image-Based Data Model through Extending the IndoorGML Concept to an Indoor Patrol Service. J. Sens. 2017, 2017, 5379106.
  20. Pintore, G.; Garro, V.; Ganovelli, F.; Gobbetti, E.; Agus, M. Omnidirectional Image Capture on Mobile Devices for Fast Automatic Generation of 2.5D Indoor Maps. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision, Lake Placid, NY, USA, 7–10 March 2016.
  21. Ghouaiel, N.; Lefèvre, S. Coupling Ground-Level Panoramas and Aerial Imagery for Change Detection. Geo-Spat. Inf. Sci. 2016, 19, 222–232.
  22. Ramalingam, S.; Bouaziz, S.; Sturm, P.; Brand, M. Geolocalization Using Skylines from Omni-Images. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), Kyoto, Japan, 27 September–4 October 2009; pp. 23–30.
  23. Zou, C.; Colburn, A.; Shan, Q.; Hoiem, D. LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2051–2059.
  24. Lee, C.Y.; Badrinarayanan, V.; Malisiewicz, T.; Rabinovich, A. RoomNet: End-to-End Room Layout Estimation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4875–4884.
  25. Kakao. Kakao Maps. Available online: https://map.kakao.com/ (accessed on 3 August 2022).
  26. Naver Labs. Releasing First of a Kind Large-Scale Localization Datasets in Crowded Indoor Spaces. Available online: https://europe.naverlabs.com/blog/first-of-a-kind-large-scale-localization-datasets-in-crowded-indoor-spaces/ (accessed on 3 August 2022).
  27. Liao, H.; Dong, W.; Peng, C.; Liu, H. Exploring Differences of Visual Attention in Pedestrian Navigation When Using 2D Maps and 3D Geo-Browsers. Cart. Geogr. Inf. Sci. 2017, 44, 474–490.
  28. Gunduz, M.; Isikdag, U.; Basaraner, M. A Review of Recent Research in Indoor Modelling & Mapping. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2016, 41, 289–294.
  29. Ellul, C.; Haklay, M. Requirements for Topology in 3D GIS. Trans. GIS 2006, 10, 157–175.
  30. Jamali, A.; Abdul Rahman, A.; Boguslawski, P.; Kumar, P.; Gold, C.M. An Automated 3D Modeling of Topological Indoor Navigation Network. GeoJournal 2017, 82, 157–170.
  31. OGC (Open Geospatial Consortium). IndoorGML v.1.1. Available online: https://docs.ogc.org/is/19-011r4/19-011r4.html (accessed on 30 December 2022).
  32. Kim, K.; Lee, K. Handling Points of Interest (POIs) on a Mobile Web Map Service Linked to Indoor Geospatial Objects: A Case Study. ISPRS Int. J. Geo-Inf. 2018, 7, 216.
  33. Claridades, A.R.; Park, I.; Lee, J. Integrating IndoorGML and Indoor POI Data for Navigation Applications in Indoor Space. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2019, 37, 359–366.
  34. Ahn, D.; Claridades, A.R.; Lee, J. Integrating Image and Network-Based Topological Data through Spatial Data Fusion for Indoor Location-Based Services. J. Sens. 2020, 2020, 8877739.
  35. Tu, Y.H.; Johansen, K.; Aragon, B.; Stutsel, B.M.; Angel, Y.; Lopez Camargo, O.A.L.; Al-Mashharawi, S.K.M.; Jiang, J.; Ziliani, M.G.; McCabe, M.F. Combining Nadir, Oblique, and Façade Imagery Enhances Reconstruction of Rock Formations Using Unmanned Aerial Vehicles. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9987–9999.
  36. Claridades, A.R.C.; Choi, H.S.; Lee, J. An Indoor Space Subspacing Framework for Implementing a 3D Hierarchical Network-Based Topological Data Model. ISPRS Int. J. Geo-Inf. 2022, 11, 76.
  37. Lee, J. A Spatial Access-Oriented Implementation of a 3-D GIS Topological Data Model for Urban Entities. Geoinformatica 2004, 8, 237–264.
  38. World Wide Web Consortium (W3C). Points of Interest (POI) Working Group. Available online: https://www.w3.org/2010/POI/ (accessed on 30 December 2022).
  39. OGC (Open Geospatial Consortium). OGC Glossary of Terms. Available online: http://www.opengeospatial.org/ogc/glossary/ (accessed on 30 December 2022).
  40. Dijkstra, E.W. A Note on Two Problems in Connexion with Graphs. Numer. Math. 1959, 1, 269–271.
Figure 1. Various methods of representing indoor spaces using spatial data.
Figure 2. Using the concept of NRS to express topological relationships between 3D indoor spaces [14].
Figure 3. The difficulty of expressing topological relationships on image data composed of pixels.
Figure 4. Collecting the omnidirectional images in the indoor subspaces: (a) 3D representation of a building; (b) subspaces composing the indoor space; (c) omnidirectional image captured at a shooting point for each subspace.
Figure 5. Defining the connections between the omnidirectional images through linkpoints.
Figure 6. Implementing the positioning of indoor spaces through linkpoints in the omnidirectional image.
Figure 7. Method of determining local coordinates of indoor POI in the omnidirectional image.
Figure 8. Concept of establishing relationships between the image and topological spatial data.
Figure 9. Extracting the NRS data from the image reference data.
Figure 10. Relaying information on detected spaces and features on the images.
Figure 11. Detecting spaces and features from the images using user mouse input.
Figure 12. Process for using the images for automated navigation based on the route calculated on the extracted NRS data.
Figure 13. Shooting points and the captured images for a portion of the study area.
Figure 14. Image parameter database structure obtained from the XML file.
Figure 15. Detecting the indoor features on the images through the SEP method.
Figure 16. Implementation of space identification on the images through a double-click action.
Figure 17. Results of automated navigation visualized on the topology-embedded image data: (a) display shown for each portion of the automated navigation; (b) route along the portion of the building network; (c) visualization of the route with annotated view directions.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
