ProactiveVA: Proactive Visual Analytics with LLM-Based UI Agent

Yuheng Zhao1

Xueli Shu1

Liwen Fan1

Lin Gao1

Yu Zhang2

Siming Chen1

1 Fudan University · 2 University of Oxford

IEEE VIS 2025

Abstract

ProactiveVA introduces an LLM-powered UI agent that proactively assists analysts in visual analytics tasks by detecting when users may need help, reasoning about their intent, and offering contextual guidance. The framework enables mixed-initiative human–AI collaboration through perception, reasoning, and acting stages. It was implemented in two VA systems and evaluated through algorithmic testing, expert studies, and user studies. ProactiveVA advances proactive, explainable, and controllable AI assistance in visual analytics.


Methods

To design effective proactive assistance, we first conducted a formative study analyzing help-seeking behaviors in user interaction logs. The study revealed when users need proactive help, what assistance they require, and how the agent should intervene. Based on this analysis, we distilled key design requirements in terms of intent recognition, strategy generation, interpretability, and controllability. These findings informed the ProactiveVA framework, which utilizes an LLM-based UI agent to perceive user needs and provide effective solutions through autonomous action execution.

Figure 2. User Behavior Patterns and Help-Seeking Categories

User interaction patterns

The formative study identified various user interaction patterns, each associated with distinct problem categories and assistance types. These empirical insights guided the agent’s perception strategies and the subsequent design of the reasoning and acting stages.

The Three-Stage Workflow of ProactiveVA

The UI agent operates through a three-stage pipeline: Perception, Reasoning, and Acting. The perception module extracts behavioral and semantic cues from users’ interactions and notes. The reasoning module infers user intent and generates a sequence of candidate operations in an iterative loop inspired by the ReAct paradigm. The acting module executes these operations while monitoring system responses, ensuring transparency, explainability, and controllability in proactive support.
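The three-stage pipeline can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: all class names, method signatures, cue-extraction rules, and operation names (`highlight_anomaly`, `suggest_filter`) are hypothetical, and the LLM-driven intent inference is replaced by a simple heuristic for clarity.

```python
# Illustrative perceive -> reason -> act loop (hypothetical names; the actual
# ProactiveVA prompts, APIs, and data schemas are not reproduced here).
from dataclasses import dataclass, field

@dataclass
class Observation:
    events: list  # behavioral cues, e.g. repeated filtering in a view
    notes: list   # semantic cues extracted from the user's notes

@dataclass
class Agent:
    history: list = field(default_factory=list)

    def perceive(self, log):
        """Extract behavioral and semantic cues from the interaction log."""
        events = [e for e in log if e.get("type") == "interaction"]
        notes = [e for e in log if e.get("type") == "note"]
        return Observation(events, notes)

    def reason(self, obs):
        """ReAct-style step: infer intent, then propose candidate operations.
        (A stand-in heuristic; the real system iterates with an LLM.)"""
        if len(obs.events) >= 3:  # e.g. repeated similar actions suggest the user is stuck
            intent = "user may be stuck exploring a view"
            plan = ["highlight_anomaly", "suggest_filter"]
        else:
            intent, plan = "no help needed", []
        self.history.append((intent, plan))
        return intent, plan

    def act(self, plan, approved=True):
        """Execute operations only after user preview/approval,
        preserving controllability."""
        return [f"executed:{op}" for op in plan] if approved else []

agent = Agent()
obs = agent.perceive([{"type": "interaction"}] * 3 + [{"type": "note"}])
intent, plan = agent.reason(obs)
results = agent.act(plan)
```

The key design point mirrored here is that `act` runs only on approved plans, keeping the human in the loop while the agent remains proactive.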


System

We designed an integrated interface that bridges the LLM-based UI agent, the VA interface, and users. To support transparent, controllable, and non-intrusive assistance, the interface consists of three components: the Chat View, the VA Interface, and the Notes View, which work together to assist users throughout the visual analysis process.

In the Chat View, the agent presents reasoning traces and suggestions in real time, allowing users to preview and approve actions. The VA Interface serves as the central workspace, where proactive tips appear subtly and disappear automatically to minimize disruption. The Notes View records both human and agent findings, enabling verification and collaborative refinement of analytical insights. Together, these elements embody ProactiveVA’s design philosophy: enabling proactive yet interpretable human–AI collaboration.

System interface

Demo Video

This video demonstrates the process of using the ProactiveVA system for visual analysis on the IEEE VAST Challenge Mini Challenge 3 and on Tableau Public dashboards (for example, the profitability analysis dashboard). The video shows how ProactiveVA, through its LLM-driven UI agent, proactively identifies user needs, provides context-relevant suggestions, and executes corresponding actions to enhance users' analytical capabilities and efficiency.

BibTeX

@article{Zhao2025ProactiveVA,
  title={ProactiveVA: Proactive Visual Analytics with LLM-Based UI Agent},
  author={Zhao, Yuheng and Shu, Xueli and Fan, Liwen and Gao, Lin and Zhang, Yu and Chen, Siming},
  journal={IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS 2025)},
  year={2025},
  url={https://zyh1222.github.io/ProactiveVA/}
}