Gaze controlled maps: scoping review of gaze-based interactions in geovisualisations

Publication detail

Contribution

Systematic operationalization of gaze-based interactions in geovisualisation through a dual-axis classification framework based on input modality (gaze-only vs. combined) and intent–response relation (active vs. passive).

Application of the PRISMA-ScR protocol to map cartographic gaze-based interaction research using explicitly defined inclusion criteria, iterative query refinement, and task-level data extraction.

Development of a structured typology of cartographic gaze-based interaction patterns (e.g., dwell activation, gaze-contingent rendering, gaze-informed adaptation) grounded in comparative synthesis across heterogeneous implementations.

Integration of multi-level data coding (article-level and interaction-level) to enable parallel analysis of hardware, eye-movement mechanisms, interaction techniques, and evaluation practices.

Design of an open-source, browser-based analytical dashboard for interactive exploration of categorized gaze-based interactions and associated methodological attributes.

Identification of recurring limitations in gaze-based map interfaces, including tracking inaccuracy, calibration drift, unintended activation (the Midas touch problem), visual fatigue, and disruption of natural gaze flow.

Publication properties

Citation

Vojtechovska, M., Popelka, S., & Kubíček, P. (2025). Gaze controlled maps: scoping review of gaze-based interactions in geovisualisations. International Journal of Digital Earth, 2510563. https://doi.org/10.1080/17538947.2025.2510563

Authors

M. Vojtechovska, S. Popelka, P. Kubíček

Year

2025

Journal

International Journal of Digital Earth

Language

EN

Abstract

Gaze-based interactions (GBIs) allow hands-free control and richer user experiences across domains. Yet, despite eye-tracking’s diagnostic use in geospatial visualisations, its potential for interactive spatial data exploration is underexplored. By providing a scoping review of the integration of GBIs into geospatial visualisations, we aim to lay the foundation for further research, as no comprehensive review has yet been carried out. Using the PRISMA-ScR framework, we assessed 26 studies employing 54 GBIs. We developed an open-source web dashboard to simplify the interpretation of multiple data items across GBIs. Most GBIs (74.1%) relied solely on gaze, with 68.5% using remote eye-trackers. Active interactions dominated (64.2%), primarily for discrete commands concerning zooming, panning, or selecting map elements. Meanwhile, passive interactions focused on gaze-informed adaptations, such as automatically updating legend content based on in-map attention. Although there were accuracy and unintended activation issues, GBIs often improved the hedonic and pragmatic quality of geovisualisations. Studies would benefit from robust user evaluations that use standardised questionnaires. Broader GBI research solutions, such as combining gaze with other modalities in extended reality, could transform how we interact with geospatial data.

Questions addressed

Q: What are gaze-based interactions in geospatial visualisation?

A: Gaze-based interactions use eye-movement data as an input channel to control or adapt interactive maps and spatial visualisations. Gaze signals such as the point-of-gaze, fixations, or gaze direction can trigger commands or drive interface changes without relying on manual input.
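As a minimal sketch (with hypothetical types and names, not taken from the reviewed studies), a gaze sample can be resolved against map features in much the same way a pointer event is resolved against interface elements:

```typescript
// Hypothetical types for a gaze-as-input channel on a 2D map view.
interface GazeSample {
  x: number;         // screen x in pixels
  y: number;         // screen y in pixels
  timestamp: number; // ms since tracking started
}

interface MapFeature {
  id: string;
  bounds: { x: number; y: number; width: number; height: number };
}

// Hit-test a gaze sample against map features, mirroring how a mouse
// event would be dispatched to the element under the cursor.
function featureUnderGaze(
  sample: GazeSample,
  features: MapFeature[],
): MapFeature | null {
  for (const f of features) {
    const b = f.bounds;
    if (
      sample.x >= b.x && sample.x <= b.x + b.width &&
      sample.y >= b.y && sample.y <= b.y + b.height
    ) {
      return f;
    }
  }
  return null;
}
```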

Q: How do active and passive gaze-based interactions differ?

A: Active gaze-based interactions treat eye movements as intentional commands used to directly manipulate a map, such as zooming or selecting features. Passive gaze-based interactions monitor visual attention and adapt the interface automatically, for example by changing displayed detail or contextual information.
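The distinction can be illustrated with two hypothetical handlers (a sketch, not an implementation from the paper): the active one executes an explicit command the user intends, while the passive one silently adapts the display to observed attention:

```typescript
// Active interaction: a deliberate dwell on a zoom control issues a
// discrete command; the eye movement itself carries the user's intent.
function onDwellOnZoomControl(map: { zoomIn(): void }): void {
  map.zoomIn();
}

// Passive interaction: attention resting on a map layer updates the
// legend automatically; the user issues no command at all.
function onAttentionChanged(
  legend: { highlight(layerId: string): void },
  attendedLayerId: string,
): void {
  legend.highlight(attendedLayerId);
}
```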

Q: What role does dwell time play in gaze-controlled interfaces?

A: Dwell time refers to a minimum fixation duration required before a gaze-triggered action occurs. The technique helps reduce unintended activations but can introduce delays and visual fatigue if thresholds are not well balanced.
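A minimal sketch of such a dwell trigger is shown below; the 600 ms threshold is purely illustrative, since real systems tune this value per task and user:

```typescript
const DWELL_THRESHOLD_MS = 600; // illustrative; tuned per task in practice

// Fires an action only after gaze has rested on the same target long
// enough, filtering out incidental glances (the Midas touch problem).
class DwellDetector {
  private currentTarget: string | null = null;
  private dwellStart = 0;
  private fired = false;

  // Feed one gaze sample at a time; returns the target id once per
  // completed dwell, otherwise null.
  update(targetId: string | null, timestamp: number): string | null {
    if (targetId !== this.currentTarget) {
      // Gaze moved to a new target (or off all targets): restart timing.
      this.currentTarget = targetId;
      this.dwellStart = timestamp;
      this.fired = false;
      return null;
    }
    if (
      targetId !== null &&
      !this.fired &&
      timestamp - this.dwellStart >= DWELL_THRESHOLD_MS
    ) {
      this.fired = true; // trigger once per continuous dwell
      return targetId;
    }
    return null;
  }
}
```

Resetting the timer whenever the target changes is what balances the two failure modes noted above: a higher threshold suppresses unintended activations, while a lower one reduces delay and fatigue.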

Q: Which eye-tracking modalities are commonly used in gaze-enhanced maps?

A: Most gaze-enhanced map systems rely on point-of-gaze data captured by remote eye-trackers, with fewer implementations using head-mounted or extended reality devices. Combined-modality setups may pair gaze with mouse, touch, foot input, or speech to improve control precision.
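A common combined-modality pattern uses gaze for pointing and a manual action for confirmation. The sketch below assumes a browser environment and hypothetical getGaze/selectAt callbacks; pressing the space bar commits a selection at the current gaze position, sidestepping both the Midas touch problem and tracker jitter during confirmation:

```typescript
interface GazeState {
  x: number;      // filtered gaze x in screen pixels
  y: number;      // filtered gaze y in screen pixels
  valid: boolean; // false when the tracker has lost the eyes
}

// Gaze points, a key press commits: a sketch of gaze plus manual
// confirmation, not a specific system from the reviewed studies.
function setupGazeConfirmSelection(
  getGaze: () => GazeState,                 // latest gaze position
  selectAt: (x: number, y: number) => void, // map selection callback
): void {
  window.addEventListener('keydown', (e) => {
    if (e.code !== 'Space') return;
    const gaze = getGaze();
    if (gaze.valid) {
      selectAt(gaze.x, gaze.y);
    }
  });
}
```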

Q: What methodological challenges arise when evaluating gaze-based map interactions?

A: Common challenges include tracker accuracy, calibration stability, unintended activations, and limited comparability across studies. Evaluation practices often vary, with many relying on informal feedback rather than standardised performance and user experience measures.
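As one concrete example of a standardised instrument (chosen here for illustration; the review does not prescribe a specific questionnaire), the System Usability Scale (SUS) scores ten items answered on a 1-5 scale, with odd items scored as (response - 1), even items as (5 - response), and the sum rescaled to 0-100:

```typescript
// Standard SUS scoring: ten responses on a 1-5 scale map to 0-100.
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error('SUS requires exactly 10 item responses (1-5)');
  }
  const sum = responses.reduce((acc, response, i) => {
    const item = i + 1; // items are 1-indexed in the questionnaire
    return acc + (item % 2 === 1 ? response - 1 : 5 - response);
  }, 0);
  return sum * 2.5;
}

// Example: a moderately positive response pattern scores 75.
console.log(susScore([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]));
```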

Michaela Vojtechovska, CC BY 4.0. Last revised 02.02.2026.
ORCID: 0009-0003-6881-1758 | mail@vojtechovska.com