Publications
2022
Mc Cutchan, Marvin; Giannopoulos, Ioannis: Encoding Geospatial Vector Data for Deep Learning: LULC as a Use Case (Journal Article). In: Remote Sensing, vol. 14, no. 12, 2022, ISSN: 2072-4292.
Tags: AI, Deep learning, geoinformation, geosemantics, LULC, Volunteered Geographic Information

Abstract: Geospatial vector data with semantic annotations are a promising but complex data source for spatial prediction tasks such as land use and land cover (LULC) classification. These data describe the geometries and the types (i.e., semantics) of geo-objects, such as a Shop or an Amenity. Unlike the raster data commonly used for such prediction tasks, geospatial vector data are irregular and heterogeneous, making it challenging for deep neural networks to learn from them. This work tackles this problem by introducing novel encodings that quantify geospatial vector data, allowing deep neural networks to learn from them and to make spatial predictions. The encodings were evaluated on a specific use case, namely LULC classification: we classified LULC using the different encodings as input to an attention-based deep neural network (the Perceiver). Based on the accuracy assessments, the potential of these encodings is compared. Furthermore, the influence of the object semantics on classification performance is analyzed by pruning the ontology that describes the semantics and repeating the LULC classification. The results of this work suggest that the encoding of the geography and the semantic granularity of geospatial vector data influence classification performance both overall and at the level of individual LULC classes. Nevertheless, the proposed encodings are not restricted to LULC classification and can be applied to other spatial prediction tasks as well.

In general, this work highlights that geospatial vector data with semantic annotations are a rich data source that unlocks new potential for spatial predictions. However, we also show that this potential depends on how much is known about the semantics, and on how the geography is presented to the deep neural network.
2019
Fogliaroni, Paolo; Mazurkiewicz, Bartosz; Kattenbeck, Markus; Giannopoulos, Ioannis: Geographic-Aware Augmented Reality for VGI (Inproceedings). In: Gartner, Georg; Huang, Haosheng (Ed.): Advances in Cartography and GIScience of the ICA (ICA-Adv), pp. 3.1–3.9, International Cartographic Association (ICA), 2, 2019. (Talk: 15th International Conference on Location Based Services (LBS 2019), Vienna; 2019-11-11 -- 2019-11-13)
Tags: augmented reality, geoAR, in-situ study, Volunteered Geographic Information

Abstract: Volunteered Geographic Information (VGI) has been a constantly growing field over the last decade, but the technologies typically used to collect it (i.e., mobile phones) cannot exploit its full potential with respect to the effort and accuracy of registering geographic data. This paper introduces the GeoAR Glasses, a novel technology enabling the use of Geographic-Aware Augmented Reality for Mobile Geographic Information Systems (Mobile GIS) and Location-Based Services (LBS). The potential of the GeoAR Glasses with respect to current mobile mapping applications is shown by means of an in-situ study (N=42) comparing two different modes of collecting VGI data. For the comparison, we take into account the accuracy of the mapped data points and the time needed to complete the mapping. The results show that the GeoAR Glasses outperform the mobile application in both positional accuracy and completion time.