Gaze controlled maps: scoping review of gaze-based interactions in geovisualisations
Contribution
Systematic operationalization of gaze-based interactions in geovisualisation through a dual-axis classification framework based on input modality (gaze-only vs. combined) and intent–response relation (active vs. passive).
Application of the PRISMA-ScR protocol to map cartographic gaze-based interaction research using explicitly defined inclusion criteria, iterative query refinement, and task-level data extraction.
Development of a structured typology of cartographic gaze-based interaction patterns (e.g., dwell activation, gaze-contingent rendering, gaze-informed adaptation) grounded in comparative synthesis across heterogeneous implementations.
Integration of multi-level data coding (article-level and interaction-level) to enable parallel analysis of hardware, eye-movement mechanisms, interaction techniques, and evaluation practices.
Design of an open-source, browser-based analytical dashboard for interactive exploration of categorized gaze-based interactions and associated methodological attributes.
Identification of recurring limitations in gaze-based map interfaces, including tracking inaccuracy, calibration drift, unintended activation (the Midas touch problem), visual fatigue, and disruption of natural gaze flow.
Publication properties
Citation
Vojtechovska, M., Popelka, S., & Kubíček, P. (2025). Gaze controlled maps: scoping review of gaze-based interactions in geovisualisations. International Journal of Digital Earth, 2510563. https://doi.org/10.1080/17538947.2025.2510563
Authors
M. Vojtechovska, S. Popelka, P. Kubíček
Year
2025
Journal
International Journal of Digital Earth
Language
EN
Abstract
Questions addressed
Q: What are gaze-based interactions in geospatial visualisation?
A: Gaze-based interactions use eye-movement data as an input channel to control or adapt interactive maps and spatial visualisations. Gaze measures such as the point of gaze, fixations, or gaze direction can trigger commands or drive interface changes without relying on manual input.
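A minimal sketch of this input channel, assuming a hypothetical GazeSample stream and a simplified screen-to-map projection (none of these names come from the paper):

```typescript
// Treating a gaze sample as an input event for a map view.
// GazeSample, MapView, and screenToMap are illustrative assumptions.

interface GazeSample {
  x: number;         // point of gaze, screen pixels
  y: number;
  timestamp: number; // ms
}

interface MapView {
  centerLon: number;
  centerLat: number;
  zoom: number;
}

// Convert a screen-space gaze point to map coordinates (simplified,
// Web-Mercator-style scale without latitude correction).
function screenToMap(
  sample: GazeSample, view: MapView, width: number, height: number,
): [number, number] {
  const degPerPixel = 360 / (256 * 2 ** view.zoom);
  const lon = view.centerLon + (sample.x - width / 2) * degPerPixel;
  const lat = view.centerLat - (sample.y - height / 2) * degPerPixel;
  return [lon, lat];
}

// A gaze sample can then drive an interface change without manual input,
// e.g. highlighting whatever lies under the user's point of regard.
function onGazeSample(sample: GazeSample, view: MapView): void {
  const [lon, lat] = screenToMap(sample, view, 1920, 1080);
  console.log(`gaze over map position ${lon.toFixed(4)}, ${lat.toFixed(4)}`);
}
```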
Q: How do active and passive gaze-based interactions differ?
A: Active gaze-based interactions treat eye movements as intentional commands used to directly manipulate a map, such as zooming or selecting features. Passive gaze-based interactions monitor visual attention and adapt the interface automatically, for example by changing displayed detail or contextual information.
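A sketch of the passive side of this distinction, assuming illustrative region identifiers and an attention threshold that a real system would have to tune:

```typescript
// Passive gaze-based adaptation: the interface accumulates visual attention
// per map region and raises the level of detail where the user has looked
// the longest. No explicit command is issued by the user.

type RegionId = string;

const attentionMs = new Map<RegionId, number>();
const DETAIL_THRESHOLD_MS = 2000; // assumed threshold, would need tuning

// Called for each fixation the eye tracker reports over a region.
function recordFixation(region: RegionId, durationMs: number): void {
  const total = (attentionMs.get(region) ?? 0) + durationMs;
  attentionMs.set(region, total);
  if (total >= DETAIL_THRESHOLD_MS) {
    increaseDetail(region); // passive response: adaptation, not a command
  }
}

function increaseDetail(region: RegionId): void {
  // Placeholder: a real map client would request higher-resolution tiles
  // or reveal additional contextual labels for the region.
  console.log(`adapting: showing more detail for region ${region}`);
}
```

An active interaction would instead treat a deliberate gaze gesture, such as a sustained dwell on a button, as the command itself; see the dwell sketch below.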
Q: What role does dwell time play in gaze-controlled interfaces?
A: Dwell time refers to a minimum fixation duration required before a gaze-triggered action occurs. The technique helps reduce unintended activations but can introduce delays and visual fatigue if thresholds are not well balanced.
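A minimal dwell-activation sketch; the threshold and tolerance values are illustrative assumptions, and as the answer above notes they must balance unintended activations against delay and fatigue:

```typescript
// Dwell-time activation: an action fires only after the gaze has stayed
// within a small radius of a target for DWELL_MS milliseconds.

const DWELL_MS = 800;  // assumed dwell threshold, ms
const RADIUS_PX = 40;  // assumed tolerance around the target, pixels

let dwellStart: number | null = null;

function updateDwell(
  gx: number, gy: number,      // current gaze point
  tx: number, ty: number,      // target centre
  now: number,                 // current time, ms
  activate: () => void,        // action to trigger on a completed dwell
): void {
  const onTarget = Math.hypot(gx - tx, gy - ty) <= RADIUS_PX;
  if (!onTarget) {
    dwellStart = null;         // gaze left the target: reset the timer
    return;
  }
  if (dwellStart === null) {
    dwellStart = now;          // gaze entered the target: start timing
  } else if (now - dwellStart >= DWELL_MS) {
    dwellStart = null;         // fire once, then require a fresh dwell
    activate();
  }
}
```

A shorter threshold makes the interface feel responsive but risks Midas-touch activations from incidental fixations; a longer one is safer but forces the user to stare, which the review links to visual fatigue.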
Q: Which eye-tracking modalities are commonly used in gaze-enhanced maps?
A: Most gaze-enhanced map systems rely on point-of-gaze data captured by remote eye-trackers, with fewer implementations using head-mounted or extended reality devices. Combined-modality setups may pair gaze with mouse, touch, foot input, or speech to improve control precision.
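A sketch of one such combined-modality pattern, in the spirit of gaze-assisted ("MAGIC"-style) pointing: gaze coarsely indicates the candidate feature and a manual click confirms it, which sidesteps the Midas touch problem. The feature lookup and event wiring are assumptions, not any specific system's API:

```typescript
// Combined gaze + mouse interaction: gaze proposes, the click disposes.

interface Feature { id: string; x: number; y: number; }

let lastGaze: { x: number; y: number } | null = null;

function onGaze(x: number, y: number): void {
  lastGaze = { x, y }; // gaze only updates the candidate, it never activates
}

function onClick(features: Feature[]): Feature | null {
  if (!lastGaze) return null;
  // Confirm the feature nearest the current point of gaze.
  let best: Feature | null = null;
  let bestDist = Infinity;
  for (const f of features) {
    const d = Math.hypot(f.x - lastGaze.x, f.y - lastGaze.y);
    if (d < bestDist) { bestDist = d; best = f; }
  }
  return best; // the explicit manual input carries the intent
}
```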
Q: What methodological challenges arise when evaluating gaze-based map interactions?
A: Common challenges include tracker accuracy, calibration stability, unintended activations, and limited comparability across studies. Evaluation practices often vary, with many relying on informal feedback rather than standardised performance and user experience measures.
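One standardised measure that such evaluations can report is spatial accuracy, expressed as the mean angular offset between reported gaze points and a known validation target. A sketch, assuming fixed calibration constants for viewing distance and pixel pitch:

```typescript
// Spatial accuracy as mean angular error in degrees of visual angle.
// VIEW_DIST_MM and PIXEL_PITCH_MM are assumed setup constants.

const VIEW_DIST_MM = 650;    // assumed eye-to-screen distance
const PIXEL_PITCH_MM = 0.25; // assumed physical size of one pixel

function angularErrorDeg(gx: number, gy: number, tx: number, ty: number): number {
  const offsetMm = Math.hypot(gx - tx, gy - ty) * PIXEL_PITCH_MM;
  return Math.atan2(offsetMm, VIEW_DIST_MM) * (180 / Math.PI);
}

function meanAccuracyDeg(
  samples: Array<[number, number]>, target: [number, number],
): number {
  const total = samples.reduce(
    (sum, [x, y]) => sum + angularErrorDeg(x, y, target[0], target[1]), 0);
  return total / samples.length;
}
```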