VisArch

With ASPLOS 2026
Sunday, March 22, 2026
Pittsburgh, PA, USA
Workshop on Systems and Architectures for Neural Rendering, AR/VR, and Visual Computing.

Call for Papers

Recent advances in neural rendering and embodied visual intelligence are reshaping AR/VR/XR systems, enabling immersive, interactive, and intelligent experiences across virtual and physical environments. This workshop invites original research contributions that span algorithms, systems, and architectures for neural rendering and AR/VR/XR computing.

Important Dates

  • Submission Deadline: February 24, 2026 (Anywhere on Earth)

  • Author Notification: February 28, 2026

Topics of Interest

We invite submissions on topics including, but not limited to:

  • Neural rendering techniques, including Neural Radiance Fields (NeRF), Gaussian Splatting, and learned scene representations.
  • Systems and architectures for AR/VR/XR computing platforms.
  • Low-latency, low-power pipelines for visual sensing, rendering, and display.
  • Hardware acceleration for visual, perception-driven, and neural workloads.
  • Sensor fusion and real-time 3D reconstruction from multimodal inputs.
  • Distributed, edge, and edge–cloud systems for scalable visual processing.
  • Simulation frameworks, benchmarks, and datasets for neural rendering and AR/VR/XR evaluation.
  • Applications in robotics, telepresence, and embodied AI, including interactive and collaborative environments.

Submission Guidelines

  • Papers must be prepared and submitted as a single file, with the main paper limited to 4 pages and unlimited additional pages for references, following the ACM format.

  • Authors should use the sigplan proceedings template from the ACM acmart LaTeX class, available on the official ACM website: https://www.acm.org/publications/proceedings-template.

  • Submissions must be fully anonymized and must not include any author-identifying information.

  • Both unpublished and previously published works, as well as works in progress, are welcome.

Additional Information

Schedule

All times listed below are UTC-4 (Eastern Daylight Time). Please consult a time zone map to convert them to your local time.

9:00 - 9:45

Keynote 1

Unlocking the Potential of Immersive Computing: An Interdisciplinary, End-to-End Systems Approach

Sarita Adve, the Richard T. Cheng Professor of Computer Science at the University of Illinois Urbana-Champaign (UIUC)

9:45 - 10:30

Keynote 2

The Future of Extended Reality is Adaptive

David Lindlbauer, Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University (CMU)

10:30 - 11:15

Keynote 3

AI x Hardware for Wearable Contextual AI Systems

Vincent T. Lee, Research Scientist at Reality Labs Silicon Research, Meta

11:15 - 11:30

Break

11:30 - 12:00

Poster Session

This session also includes two demo presentations:
• Demo 1: Human Vision-Driven Optimizations of AR and VR (from Prof. Yuhao Zhu’s group)
• Demo 2: Long Context Video Understanding for AR and VR (from Prof. Sai Qian Zhang’s group)

Keynote Speakers

Unlocking the Potential of Immersive Computing: An Interdisciplinary, End-to-End Systems Approach

Sarita Adve
The Richard T. Cheng Professor of Computer Science at the University of Illinois Urbana-Champaign (UIUC)
Bio.

Sarita Adve is the Richard T. Cheng Professor of Computer Science at the University of Illinois Urbana-Champaign, where she directs the campus-wide IMMERSE Center for Immersive Computing. Her research interests span the computing system stack from hardware to applications, with a current focus on extended reality (XR) systems. Her group maintains ILLIXR (ILLinois eXtended Reality), an open-source XR system and research testbed to democratize XR research, development, and benchmarking. Her work on the data-race-free, Java, and C++ memory models forms the foundation for memory models used in most hardware and software systems today. She is also known for her work on heterogeneous systems and software-driven approaches for hardware resiliency. She is a member of the American Academy of Arts and Sciences, a fellow of the ACM, IEEE, and AAAS, and a recipient of the ACM/IEEE-CS Ken Kennedy award, the SIGARCH Maurice Wilkes award, the Computing Research Association (CRA) distinguished service award for the CARES movement, and the IIT Bombay distinguished alumni award.

Abstract.

Immersive computing, including virtual, augmented, and mixed reality (collectively extended reality, or XR), has the potential to transform most industries and human activities. Immersive systems that provide comfortable, mobile, all-day, trustworthy, rich experiences remain a grand challenge. The IMMERSE Center for Immersive Computing at Illinois brings together campus-wide expertise in immersive technologies, applications, and human experience to address this grand challenge. I will first briefly describe the wide range of interdisciplinary work enabled by IMMERSE and then focus on end-to-end systems work inspired by such an approach. Central to this systems work is the idea that addressing our grand challenge will require resource-constrained XR devices to harness the compute power of the edge and the cloud over wireless networks, without compromising the human experience. We show some of the first end-to-end distributed XR systems that provide low-power, real-time head tracking, rendering with 6DoF reprojection, scene reconstruction, semantic understanding, and other XR features by judiciously offloading XR computations over a wireless network without compromising user experience. Much of this work is enabled by the ILLIXR (ILLinois eXtended Reality) open-source end-to-end XR system and research testbed that we designed and maintain to democratize XR systems research. This is joint work with a large number of collaborators.

The Future of Extended Reality is Adaptive

David Lindlbauer
Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University (CMU)
Bio.

David Lindlbauer is an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, where he leads the Augmented Perception Lab and co-directs the CMU Extended Reality Technology Center. His research focuses on understanding how humans perceive and interact with digital information, and on building technology that goes beyond the flat displays of PCs and smartphones to advance our capabilities when interacting with the digital world. To achieve this, he creates and studies enabling technologies and computational approaches that control when, where, and how virtual content is presented to increase the usability of Augmented Reality and Virtual Reality interfaces. Prof. Lindlbauer holds a PhD from TU Berlin and was a postdoctoral researcher at ETH Zurich before joining CMU. He has published more than 45 scientific papers at premier venues in Human-Computer Interaction such as ACM CHI and ACM UIST. His work has attracted media attention in outlets such as MIT Technology Review, Fast Company Design, and Shiropen Japan.

Abstract.

New interactive technologies such as Extended Reality (XR) have the potential to transform the way we interact with digital information, and they promise a rich set of applications ranging from productivity and architecture to interaction with smart devices. Current XR systems, however, are static: users must manually adjust factors such as the visibility, placement, and appearance of their user interface every time they change their task or environment. This is distracting and leads to information overload. To overcome these challenges, we aim to understand and predict how users perceive and interact with digital information, and to use this knowledge in context-aware systems that automatically adapt when, where, and how virtual elements are presented. In this talk, I will present computational and multimodal approaches that leverage contextual knowledge about users, the environment, and system capabilities to create advanced interactive experiences. Our systems increase the applicability of XR and advanced interactive systems, with the goal of seamlessly blending the virtual and physical worlds.

AI x Hardware for Wearable Contextual AI Systems

Vincent T. Lee
Research Scientist at Reality Labs Silicon Research, Meta
Bio.

Vincent T. Lee is a Research Scientist at Reality Labs Silicon Research at Meta. He received his Ph.D. in Computer Science and Engineering from the University of Washington in 2019, specializing in Computer Architecture. His current research interests are AI-assisted electronic design automation and methodology, and full-system architecture modeling for future generations of wearable XR systems. His previous work bridges the gap between hardware design and a broad range of domains, including stochastic computing, synthesis and solver-aided techniques, homomorphic encryption, similarity search, and machine perception.

Abstract.

The next generation of human-oriented computing will require always-on, spatially aware wearable devices that capture egocentric vision and answer functional primitives (e.g., Where am I? What am I looking at?). These devices will sense an egocentric view of the world around us, observing human-relevant signals across space and time to construct and maintain a user’s personal context. This personal context, combined with advanced generative AI, will unlock a powerful new generation of contextual AI personal assistants and applications. However, designing and building a wearable system to support contextual AI is a daunting task because of the system’s stringent power constraints, driven by weight and battery restrictions, and its highly complex hardware architecture. To understand how to guide design for such systems, we construct and present the first complete system-architecture view of one such wearable contextual AI system (Aria2). We will then look at the technical depth and breadth collectively required to build and implement such a system and highlight key opportunities for AI-assisted hardware design. By harnessing advances in AI for hardware, we expect to develop powerful new capabilities that help bridge the semantic gap and automate technical expertise to accelerate the implementation of future wearable systems.

Accepted Papers

The following papers have been accepted for presentation at the workshop.

Workshop Organizers

Sai Qian Zhang
New York University
Yuhao Zhu
University of Rochester
Yang (Katie) Zhao
University of Minnesota, Twin Cities
Nandita Vijaykumar
University of Toronto
Sushant Kondguli
Meta
Jaewoong Sim
Seoul National University
Haiyu Wang
New York University