GazePlotter: An open-source solution for the automatic generation of scarf plots from eye-tracking data
Contribution
Introduces a browser-based, installation-free platform that automatically recognises and parses raw exports from six major eye-tracking software tools—removing the need for custom preprocessing scripts or programming expertise.
Provides full support for dynamic AOIs that change position or visibility over time, with per-participant visibility layers, a capability absent from existing open-source scarf plot tools.
Implements three timeline modes—absolute (ms), relative (normalised duration), and ordinal (segment order)—enabling flexible temporal interpretation without data re-export.
Ensures privacy-preserving, fully client-side computation with no data transmission, supporting use in clinical, educational, and institutional contexts where gaze data must remain within trusted environments.
Validates AOI-level metrics against SMI BeGaze and Tobii Pro Lab, within a ±1 ms tolerance for temporal measures and with exact matches for fixation counts, confirming quantitative reliability alongside exploratory utility.
Demonstrates positive user experience across 35 participants with a UEQ-S Overall score in the "Excellent" category and Hedonic Quality placing GazePlotter in the top 10% of the global benchmark dataset.
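The three timeline modes listed above can be illustrated with a small sketch. This is not GazePlotter's actual code; segment shape and function names are assumptions for illustration only.

```javascript
// Sketch (illustrative, not GazePlotter's implementation) of converting one
// participant's AOI segments between the three timeline modes.
// A segment is { aoi, start, end } with times in milliseconds, sorted by start.

function toAbsolute(segments) {
  // Absolute mode keeps raw millisecond timestamps unchanged.
  return segments.map((s) => ({ ...s }));
}

function toRelative(segments) {
  // Relative mode normalises timestamps by total recording duration,
  // so every participant's timeline spans 0..1 regardless of length.
  const origin = segments[0].start;
  const total = segments[segments.length - 1].end - origin;
  return segments.map((s) => ({
    aoi: s.aoi,
    start: (s.start - origin) / total,
    end: (s.end - origin) / total,
  }));
}

function toOrdinal(segments) {
  // Ordinal mode discards durations: every segment occupies one unit of
  // width, showing only the order in which AOIs were visited.
  return segments.map((s, i) => ({ aoi: s.aoi, start: i, end: i + 1 }));
}

const demo = [
  { aoi: 'Map', start: 0, end: 400 },
  { aoi: 'Legend', start: 400, end: 600 },
  { aoi: 'Map', start: 600, end: 1000 },
];
console.log(toRelative(demo)[1]); // { aoi: 'Legend', start: 0.4, end: 0.6 }
console.log(toOrdinal(demo)[2]);  // { aoi: 'Map', start: 2, end: 3 }
```

All three views are derived from the same parsed segments, which is why no data re-export is needed to switch between them.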
Publication properties
Citation
Vojtechovska, M., & Popelka, S. (2026). GazePlotter: An open-source solution for the automatic generation of scarf plots from eye-tracking data. Behavior Research Methods, 58, 85. https://doi.org/10.3758/s13428-026-02959-5
Authors
Vojtechovska, M., & Popelka, S.
Year
2026
Journal
Behavior Research Methods
Language
EN
Abstract
Questions addressed
Q: What is a scarf plot and what analytical problem does it solve in eye-tracking research?
A: A scarf plot (or sequence chart) represents gaze as colour-coded AOI segments aligned on a shared timeline and stacked by participant. It was introduced by Richardson and Dale (2005) to address limitations of heatmaps and aggregate metrics, which erase the chronological structure of attention. The article frames scarf plots as tools for first-pass interpretation—revealing when participants attend to specific elements, who does so, and in what order—making them particularly informative for process-oriented studies where the sequence and rhythm of information acquisition matter more than cumulative exposure.
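The structure described above can be made concrete with a minimal text-mode sketch. This is not GazePlotter's renderer; the binning scheme and names are assumptions chosen for brevity.

```javascript
// Minimal sketch (illustrative only) of a scarf-plot row: one participant's
// gaze rendered as AOI-coded characters, one character per fixed time bin,
// so rows from different participants align on a shared timeline.

function scarfRow(segments, binMs, letters) {
  const end = Math.max(...segments.map((s) => s.end));
  let row = '';
  for (let t = 0; t < end; t += binMs) {
    // Find the AOI segment covering this time bin; '.' marks gaps
    // (gaze outside any AOI).
    const hit = segments.find((s) => s.start <= t && t < s.end);
    row += hit ? letters[hit.aoi] : '.';
  }
  return row;
}

const letters = { Map: 'M', Legend: 'L' };
const participant1 = [
  { aoi: 'Map', start: 0, end: 300 },
  { aoi: 'Legend', start: 300, end: 500 },
];
console.log(scarfRow(participant1, 100, letters)); // "MMMLL"
```

Stacking one such row per participant yields the "when, who, and in what order" reading the article attributes to scarf plots.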
Q: How can a remote usability study be instrumented directly inside a web application?
A: The article describes embedding a full evaluation protocol—informed consent, ten task prompts, and a UEQ-S questionnaire—into the application itself. Task success and completion times were captured in real time by hooking into the undo/redo event pipeline, so every logged action corresponded to a verified functional operation rather than self-reported behaviour. This instrumented approach eliminated the need for screen-sharing, external survey tools, or facilitator presence, enabling unmoderated remote collection of both behavioural and attitudinal data within a single session.
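The instrumentation idea, logging at the point where an operation enters the undo/redo pipeline, can be sketched as follows. Class and command names here are hypothetical, not GazePlotter's API.

```javascript
// Sketch (assumed design, not the article's code) of hooking event logging
// into a command-based undo/redo pipeline: an event is recorded only when a
// command actually executes, so the log reflects verified operations rather
// than self-reported behaviour.

class CommandLog {
  constructor(now = Date.now) {
    this.undoStack = [];
    this.events = [];
    this.now = now; // injectable clock for testing
  }
  execute(command) {
    command.do();
    this.undoStack.push(command);
    // Logging happens at the same point the command enters the undo stack.
    this.events.push({ name: command.name, at: this.now() });
  }
  undo() {
    const command = this.undoStack.pop();
    if (command) {
      command.undo();
      this.events.push({ name: `undo:${command.name}`, at: this.now() });
    }
  }
}

// Usage: a toy "rename AOI" command.
let label = 'AOI 1';
const log = new CommandLog(() => 1000);
log.execute({
  name: 'renameAoi',
  do: () => { label = 'Roads'; },
  undo: () => { label = 'AOI 1'; },
});
log.undo();
console.log(label);                          // "AOI 1"
console.log(log.events.map((e) => e.name));  // ["renameAoi", "undo:renameAoi"]
```

Task success and completion times then fall out of the timestamps already attached to each logged command.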
Q: Why does temporal flattening in heatmaps misrepresent cognitive strategy?
A: Heatmaps compress the full chronological sequence of gaze into a single static density field, erasing the distinction between early orienting fixations and late evaluative ones. Spatial smoothing further masks individual variability—an outlier's intense focus can propagate as apparent group consensus. The article argues that without a temporal dimension, researchers cannot recover the order, rhythm, or phase structure of attention, which is precisely the information needed to infer strategy differences, learning progressions, or decision-stage transitions.
Q: How can browser-based streaming parse multi-gigabyte eye-tracking files without exhausting memory?
A: Rather than loading entire exports into memory, the article describes a chunk-by-chunk processing pipeline built on the JavaScript ReadableStream API. Each chunk is parsed, transformed, and aggregated incrementally before the next is read. Parsing routines run in parallel Web Workers to avoid blocking the main thread. On a mid-range laptop, a 12.76 GB file (38 participants, 1,320 AOIs) was parsed in under four minutes with heap usage staying below 30 MB—demonstrating that processing time scales with file size while memory remains bounded.
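The incremental-aggregation idea can be sketched as below. This is not GazePlotter's parser: the two-column CSV format is invented for the example, and the function accepts any async iterable of byte chunks, which is exactly what `for await` over a ReadableStream yields.

```javascript
// Sketch of chunk-by-chunk aggregation (illustrative, not GazePlotter's
// code): memory holds only the running totals plus at most one partial line
// carried across a chunk boundary, so heap usage stays bounded regardless
// of file size.

async function aggregateDurations(chunks) {
  const decoder = new TextDecoder();
  const totals = new Map(); // AOI name -> summed duration (ms)
  let tail = '';            // partial line split across chunk boundaries
  for await (const chunk of chunks) {
    const lines = (tail + decoder.decode(chunk, { stream: true })).split('\n');
    tail = lines.pop(); // last piece may be incomplete; keep for next chunk
    for (const line of lines) {
      if (!line) continue;
      const [aoi, duration] = line.split(',');
      totals.set(aoi, (totals.get(aoi) ?? 0) + Number(duration));
    }
  }
  if (tail) {
    const [aoi, duration] = tail.split(',');
    totals.set(aoi, (totals.get(aoi) ?? 0) + Number(duration));
  }
  return totals;
}

// Usage with an in-memory source standing in for a large file; note the
// record deliberately split across the chunk boundary ("Lege" / "nd,150").
async function* demoChunks() {
  const enc = new TextEncoder();
  yield enc.encode('Map,200\nLege');
  yield enc.encode('nd,150\nMap,50\n');
}
aggregateDurations(demoChunks()).then((t) => console.log(t.get('Map'))); // 250
```

Handling the split record correctly is the essential detail: chunk boundaries fall anywhere, so a streaming parser must never assume a chunk ends on a row boundary.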
Q: What validation strategy can confirm that an open-source eye-tracking tool produces metrics equivalent to proprietary software?
A: The article applies a four-layer validation pipeline: (1) exploratory inspection of raw parsed segments to catch format-specific edge cases such as off-by-one boundaries and overlapping AOI assignments; (2) unit tests formalising discovered edge cases for regression detection; (3) cross-browser end-to-end tests verifying interface fidelity across Chrome, Firefox, Safari, and Edge; and (4) metric-level benchmarking against SMI BeGaze and Tobii Pro Lab using strict tolerances (±1 ms for temporal metrics, exact match for counts). This layered approach isolates errors from low-level parsing through to high-level aggregation.
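The metric-level benchmarking rule in layer (4) can be expressed as a small comparison check. The metric names and the Count-suffix convention below are assumptions for the sketch, not the article's test harness.

```javascript
// Sketch of the tolerance rule described above (assumed shape, not the
// article's code): temporal metrics must agree within ±1 ms with the
// reference software; count metrics must match exactly.

function metricsMatch(ours, reference) {
  const failures = [];
  for (const [name, value] of Object.entries(reference)) {
    const isCount = name.endsWith('Count'); // naming convention assumed here
    const delta = Math.abs(ours[name] - value);
    if (isCount ? delta !== 0 : delta > 1) {
      failures.push({ name, ours: ours[name], reference: value });
    }
  }
  return failures; // empty array means the tool passes for this AOI
}

// Usage: comparing one AOI's metrics against a reference export.
const fromTool = { dwellTimeMs: 1240.4, fixationCount: 7 };
const fromReference = { dwellTimeMs: 1241.0, fixationCount: 7 };
console.log(metricsMatch(fromTool, fromReference)); // [] — within tolerance
```

Running such a check per AOI and per participant against both reference exports is what turns the benchmark into a regression-detectable test.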
Q: How do exportable dashboard states support reproducibility in exploratory gaze analysis?
A: The article highlights that exploratory no-code tools risk interpretive bias if analytic configurations are not documented. GazePlotter addresses this by exporting lightweight JSON files that capture the full analytic state—loaded data, layout positions, filter settings, visualisation parameters, and the software version. Collaborators can reopen the exact configuration on any device without reprocessing the original dataset. The article argues this state-level portability moves beyond static figure sharing toward verifiable, reproducible visual analysis, and recommends that researchers share dashboard files or explicitly report configurations when publishing results.
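The state-level portability described above can be sketched as a simple export/import round trip. The field names below are illustrative, not GazePlotter's documented schema.

```javascript
// Sketch (assumed schema) of exporting and restoring a dashboard state as
// lightweight JSON: the full analytic configuration travels with the file,
// so a collaborator can reopen the exact view without reprocessing data.

function exportState(workspace) {
  return JSON.stringify({
    version: workspace.version,   // software version, for compatibility checks
    timeline: workspace.timeline, // 'absolute' | 'relative' | 'ordinal'
    filters: workspace.filters,   // e.g. hidden participants or AOIs
    layout: workspace.layout,     // plot positions in the dashboard
  });
}

function importState(json) {
  const state = JSON.parse(json);
  if (!state.version) {
    throw new Error('Missing version: cannot check compatibility');
  }
  return state;
}

// Usage: round-tripping a configuration.
const saved = exportState({
  version: '1.4.0',
  timeline: 'relative',
  filters: { hiddenAois: ['Background'] },
  layout: [{ plot: 'scarf', x: 0, y: 0 }],
});
console.log(importState(saved).timeline); // "relative"
```

Recording the software version in the state file is what lets a future version refuse, or migrate, configurations it can no longer reproduce faithfully.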