Publications
2022
Alinaghi, Negar; Kattenbeck, Markus; Giannopoulos, Ioannis: I Can Tell by Your Eyes! Continuous Gaze-Based Turn-Activity Prediction Reveals Spatial Familiarity (Inproceedings). In: pp. 2:1–2:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022, ISBN: 978-3-95977-257-0. Tags: eye tracking, human activity recognition, Machine Learning, Spatial Familiarity.
Abstract: Spatial familiarity plays an essential role in the wayfinding decision-making process. Recent findings in the wayfinding activity recognition domain suggest that wayfinders' turning behavior at junctions is strongly influenced by their spatial familiarity. By continuously monitoring wayfinders' turning behavior as reflected in their eye movements during the decision-making period (i.e., from immediately after an instruction is received until the corresponding junction is reached), we provide evidence that familiar and unfamiliar wayfinders can be distinguished. Applying a pre-trained XGBoost turn-activity classifier to gaze data collected in a real-world wayfinding task with 33 participants, our results suggest that familiar and unfamiliar wayfinders show different onset and intensity of turning behavior. These variations are present not only between the two classes (familiar vs. unfamiliar) but also within each class. The within-class differences in turning behavior may stem from multiple sources, including different levels of familiarity with the environment.
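The "onset and intensity" comparison described in the abstract can be illustrated with a minimal sketch. All names, thresholds, and numbers below are invented for illustration, not taken from the paper: given per-frame turn probabilities emitted by a pre-trained turn-activity classifier (such as the XGBoost model the abstract mentions), one could extract the onset (first frame above a threshold) and intensity (peak probability) of turning behavior per approach to a junction.

```python
def turn_onset_and_intensity(turn_probs, threshold=0.5):
    """Return (onset_index, peak_probability) for one gaze sequence.

    turn_probs: per-frame turn probabilities from a pre-trained classifier.
    onset_index is None if the threshold is never crossed.
    """
    onset = next((i for i, p in enumerate(turn_probs) if p >= threshold), None)
    return onset, max(turn_probs)

# Two invented sequences: one wayfinder shows turning behavior late and
# sharply, the other early and more gradually.
late_sharp = [0.1, 0.1, 0.2, 0.7, 0.9]
early_gradual = [0.3, 0.5, 0.6, 0.6, 0.7]
print(turn_onset_and_intensity(late_sharp))     # (3, 0.9)
print(turn_onset_and_intensity(early_gradual))  # (1, 0.7)
```

Comparing the distribution of such (onset, intensity) pairs across groups is one way the between- and within-class variation described above could be quantified.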
2021
Alinaghi, Negar; Kattenbeck, Markus; Golab, Antonia; Giannopoulos, Ioannis: Will You Take This Turn? Gaze-Based Turning Activity Recognition During Navigation (Inproceedings). In: Janowicz, Krzysztof; Verstegen, Judith A. (Ed.): 11th International Conference on Geographic Information Science (GIScience 2021) – Part II, pp. 5:1–5:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl, Germany, 2021, ISSN: 1868-8969. Tags: eye tracking, human activity recognition, Machine Learning, wayfinding.
Abstract: Decision making is an integral part of wayfinding, and people increasingly use navigation systems to facilitate this task. The primary decision, which is also the main source of navigation error, concerns the turning activity, i.e., whether to turn left, turn right, or continue straight ahead. The fundamental step in dealing with this error, before applying any preventive approach (e.g., providing more information) or compensatory solution (e.g., pre-calculating alternative routes), is to predict and recognize the potential turning activity. This paper addresses this step by predicting the turning decision of pedestrian wayfinders, before the actual action takes place, using primarily gaze-based features. Using machine learning methods, the presented experiment achieves an overall accuracy of 91% within three seconds before arrival at a decision point. Beyond the application perspective, our findings also shed light on the cognitive processes of decision making as reflected in the wayfinder's gaze behaviour: incorporating environmental and user-related factors into the model results in a noticeable change in the importance of visual search features for turn-activity recognition.
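To make the "gaze-based features within a window before the decision point" idea concrete, here is a hypothetical sketch. The feature names, the window length, and the toy decision rule are all assumptions for illustration; they stand in for, and do not reproduce, the trained model described in the abstract.

```python
def gaze_window_features(samples, t_end, window=3.0):
    """Aggregate gaze samples from the `window` seconds before t_end.

    samples: list of (timestamp_s, horizontal_gaze_deg) pairs, where a
    negative angle means the gaze points left of the walking direction.
    """
    xs = [x for t, x in samples if t_end - window <= t <= t_end]
    return {
        "mean_x": sum(xs) / len(xs),        # average horizontal gaze offset
        "dispersion": max(xs) - min(xs),    # spread of gaze within the window
        "n_samples": len(xs),
    }

def naive_turn_guess(features):
    """Toy left/right/straight rule standing in for a trained classifier."""
    if abs(features["mean_x"]) < 5.0:
        return "straight"
    return "left" if features["mean_x"] < 0 else "right"

# Invented gaze trace: the wayfinder looks consistently to the left.
trace = [(0.5, -10.0), (1.0, -12.0), (2.5, -8.0)]
feats = gaze_window_features(trace, t_end=3.0)
print(feats["mean_x"])          # -10.0
print(naive_turn_guess(feats))  # left
```

A real model would combine many such windowed features (and, as the abstract notes, environmental and user-related factors) rather than a single threshold on gaze direction.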
2019
Gokl, Lukas; McCutchan, Marvin; Mazurkiewicz, Bartosz; Fogliaroni, Paolo; Giannopoulos, Ioannis: Towards Urban Environment Familiarity Prediction (Inproceedings). In: Gartner, Georg; Huang, Haosheng (Ed.): Advances in Cartography and GIScience of the ICA (ICA-Adv), pp. 5-1–5-8, International Cartographic Association (ICA), vol. 2, 2019. (Talk: 15th International Conference on Location Based Services (LBS 2019), Vienna; 2019-11-11 – 2019-11-13). Tags: environment familiarity, Machine Learning, virtual environment.
Abstract: Location Based Services (LBS) are very helpful for people interacting with an unfamiliar environment, but also for those who already possess a certain level of familiarity with it. To avoid overwhelming familiar users with unnecessary information, the level of detail offered by an LBS should be adapted to the user's familiarity with the environment: more detail for unfamiliar users, and less information (which would be superfluous, if not misleading) for users who are more familiar with the current environment. Currently, the information exchange between the service and its users does not take familiarity into account. In this work, we investigate the potential of machine learning for a binary classification (familiar vs. unfamiliar) of a user's familiarity with the surrounding environment. For this purpose, a 3D virtual environment based on a part of Vienna, Austria, was designed using datasets from the municipal government. During a navigation experiment with 22 participants, we collected ground-truth data to train four machine learning algorithms. The captured data included the motion and orientation of the users as well as their visual interaction with the surrounding buildings during navigation. This work demonstrates the potential of machine learning for predicting the state of familiarity as an enabling step towards LBS better tailored to the user.
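The binary familiar/unfamiliar classification from motion, orientation, and visual-interaction data can be sketched with a trivial nearest-centroid baseline. The feature names and all numbers below are invented for illustration; the paper itself trained four (unspecified here) machine learning algorithms on the collected ground truth.

```python
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def fit(X, y):
    """One centroid per class label."""
    return {label: centroid([x for x, l in zip(X, y) if l == label])
            for label in set(y)}

def predict(model, x):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(x, model[label]))
    return min(model, key=dist)

# Invented per-participant features:
# [walking speed (m/s), head-rotation rate (rad/s), building gazes per minute]
X = [[1.4, 0.2, 2.0], [1.3, 0.3, 1.5], [0.9, 0.8, 6.0], [1.0, 0.9, 7.0]]
y = ["familiar", "familiar", "unfamiliar", "unfamiliar"]

model = fit(X, y)
print(predict(model, [1.35, 0.25, 1.8]))  # familiar
print(predict(model, [0.95, 0.85, 6.5]))  # unfamiliar
```

The intuition encoded in the toy data (familiar users move faster and inspect buildings less) is only a plausible assumption; the actual discriminative patterns are what the trained models in the paper learn from data.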
2018
McCutchan, Marvin; Giannopoulos, Ioannis: Geospatial Semantics for Adaptive Interaction (Inproceedings). In: Kuhn, Werner; Kemp, Karen; et al. (Ed.): GIScience 2018 – Workshop on Core Computations on Spatial Information, pp. 1:1–1:4, 2018. (Talk: GIScience 2018 – Workshop on Core Computations on Spatial Information, Melbourne, Australia; 2018-08-28). Tags: Geospatial semantics, Linked Data, Machine Learning, spatial prediction.
Abstract: This work presents a concept for adaptive interaction dialogues based on geospatial semantics and machine learning. The proposed system should enable users to interact efficiently and effectively with their surrounding environment. Through these adaptive interaction dialogues, users should be able to ask relevant questions in a more natural way.
McCutchan, Marvin; Giannopoulos, Ioannis: Geospatial Semantics for Spatial Prediction (Inproceedings). In: Winter, Stephan; Griffin, Amy; Sester, Monika (Ed.): Proceedings of the 10th International Conference on Geographic Information Science (GIScience 2018), pp. 45:1–45:6, LIPIcs, vol. 114, 2018, ISBN: 978-3-95977-083-5. (Talk: 10th International Conference on Geographic Information Science (GIScience 2018), Melbourne; 2018-08-28 – 2018-08-31). Tags: Geospatial semantics, Linked Data, Machine Learning, spatial prediction.
Abstract: This paper explores the potential of geospatial semantics for spatial prediction. To this end, data from the LinkedGeoData platform is used to predict land-cover classes described by the CORINE dataset. Geo-objects obtained from LinkedGeoData are described by an OWL ontology, which is utilized for spatial prediction in this paper. The prediction is based on an association analysis that computes collocations between land-cover classes and the semantically described geo-objects. The paper provides an analysis of the learned association rules and concludes with a discussion of the promising potential of geospatial semantics for spatial prediction, as well as of potentially fruitful directions for future research in this domain.
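The association analysis described in the abstract can be illustrated with a minimal sketch of rule statistics. The cell contents, class names, and counts below are invented examples, not data from LinkedGeoData or CORINE: for a rule of the form "cell contains geo-object type ⇒ cell has land-cover class", one would compute its support and confidence over a set of spatial cells.

```python
def rule_stats(cells, antecedent, consequent):
    """Support and confidence of the rule: antecedent object ⇒ consequent class.

    cells: list of dicts with an 'objects' set (geo-object types present)
    and a 'landcover' label for that cell.
    """
    with_antecedent = [c for c in cells if antecedent in c["objects"]]
    with_both = [c for c in with_antecedent if c["landcover"] == consequent]
    support = len(with_both) / len(cells)
    confidence = len(with_both) / len(with_antecedent)
    return support, confidence

# Invented toy grid of four cells:
cells = [
    {"objects": {"tram_stop", "cafe"}, "landcover": "urban_fabric"},
    {"objects": {"tram_stop"},         "landcover": "urban_fabric"},
    {"objects": {"farm"},              "landcover": "agricultural"},
    {"objects": {"cafe"},              "landcover": "urban_fabric"},
]
print(rule_stats(cells, "tram_stop", "urban_fabric"))  # (0.5, 1.0)
```

In the paper, the antecedents are semantically described via an OWL ontology rather than plain type strings, which lets rules generalize over ontology classes instead of individual object types.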
Giannopoulos, Ioannis: Pedestrian Navigation: What Can We Learn From Eye Tracking, Mixed Reality and Machine Learning (Journal Article). In: Österreichische Zeitschrift für Vermessung und Geoinformation (VGI), vol. 106, no. 3, pp. 220–225, 2018. Tags: eye-tracking, Machine Learning, Mixed Reality, navigation.
Abstract (translated from German): To understand various processes such as navigation, it is crucial to understand how people interact with their environment during decision making. During spatial decision making, people also interact with spatial data, which is often presented to them via display devices. With the help of eye tracking, mixed reality and machine learning, we can achieve a better understanding and an optimization of the relevant interaction dialogues, classify relevant information spaces, and assist people during the decision-making process.