Spatially Programmable Lensing

Date: 2026/04/01


Academic Seminar: Spatially Programmable Lensing

Speaker: Yingsi, Ph.D. candidate in Electrical and Computer Engineering at Carnegie Mellon University

Time: 9:00-10:00 a.m., April 1, 2026 (Beijing Time)

Location: online

Abstract:
At the heart of most optical systems is a lens that produces a single fixed focal plane. This assumption limits both imaging and displays: cameras must trade off aperture, depth of field, and diffraction, while VR displays project all pixels to a single accommodation distance, preventing natural focus cues and causing visual discomfort. These challenges exist because a lens creates a single global focal plane for all pixels—one that can move but cannot change shape.

This talk explores a new class of optical systems that challenges this convention by allowing spatially varying focus across the sensor or display. The first part introduces the Split-Lohmann Multifocal Display, a near-eye display that can simultaneously place individual pixels at different depths, fully supporting the natural focusing of the eye. This technique enables real-time streaming of 3D content over a large depth range at high spatial resolution, offering an exciting step towards a more immersive and interactive 3D viewing experience.

The second part presents Spatially-Varying Autofocus, a technique that focuses different parts of the sensor onto different depths in the scene, enabling an arbitrary focal surface while maintaining a large aperture and high spatial resolution. Our prototype demonstrates real-time, optical all-in-focus imaging.

Together, these optical systems advance the idea of depth as a spatially programmable dimension, opening new possibilities for imaging and display applications, including 3D perception, autonomous driving, microscopy, and immersive displays.

Biography:

Yingsi is a Ph.D. candidate in Electrical and Computer Engineering at Carnegie Mellon University, advised by Prof. Aswin Sankaranarayanan and Prof. Matthew O’Toole. Yingsi is broadly interested in embedding fundamental physical laws, such as complex light transport and 3D geometry, into computational frameworks and generative algorithms to better understand and interact with the visual world. Her work on novel programmable imaging and 3D display systems demonstrates how strong physical priors, integrated with programmability, can overcome the limits of traditional sensing and perception. Specifically, her research introduces spatially adaptive cameras and displays, building the foundation for next-generation machine vision, computational imaging, and immersive displays. Her research spans computer vision, 3D perception, signal processing, optics, and machine learning. Yingsi’s work has been recognized with the Best Paper Award at SIGGRAPH 2023, the Best Demo Award at ICCP 2023, and the Best Paper (Marr Prize) Honorable Mention Award at ICCV 2025. Yingsi is also a recipient of the Tan Endowed Graduate Fellowship and the James Sprague Presidential Fellowship at Carnegie Mellon University.

Prior to CMU, Yingsi obtained her Bachelor of Science in Computer Science from Columbia University and her Bachelor of Arts in Physics from Colgate University. She was a research intern at Meta Reality Labs in the Display Systems Research team (2024, 2025) and Snap Research in the Computational Imaging team (2020). She was also a software engineering intern at Google Search (2019).