I am a research scientist at Meta Reality Labs in Redmond, Washington. I work on computer vision and machine learning for applications in computational photography.
I completed my PhD at Brown, where I was advised by James Tompkin. I received a Fulbright Scholarship for my Master's at the Courant Institute of New York University, where my thesis was advised by Ken Perlin.
We propose a method for dynamic scene reconstruction based on a deformable set of 3D Gaussians residing in a canonical space, together with a time-dependent deformation field defined by a multi-layer perceptron (MLP).
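To make the formulation concrete, here is a toy sketch of the idea: Gaussian centers live in a canonical space, and a small time-conditioned MLP predicts a per-point offset that deforms them to each timestep. The array shapes, layer sizes, and names below are illustrative, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Canonical 3D Gaussian centers (N x 3); a toy stand-in for an
# optimized canonical Gaussian set.
canonical_xyz = rng.normal(size=(8, 3))

# Tiny MLP: input (x, y, z, t) -> offset (dx, dy, dz).
W1 = rng.normal(scale=0.1, size=(4, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3))
b2 = np.zeros(3)

def deform(xyz, t):
    """Map canonical centers to their positions at time t."""
    # Concatenate a time column onto the coordinates.
    inp = np.concatenate([xyz, np.full((len(xyz), 1), t)], axis=1)
    h = np.maximum(inp @ W1 + b1, 0.0)  # ReLU hidden layer
    return xyz + h @ W2 + b2            # canonical position + predicted offset

# Rendering at time t would splat the deformed centers
# in place of the canonical ones.
deformed = deform(canonical_xyz, t=0.5)
```

In practice the canonical Gaussians and the deformation MLP are optimized jointly against the input video, but the query pattern is the same: evaluate the field at (position, time) and render the displaced Gaussians.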
We present a novel image-guided texture synthesis method to transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
We propose a method for generating tiled multiplane images with only a small number of adaptive depth planes for single-view 3D photography in the wild.
We aim to estimate temporally consistent depth maps from video streams in an online setting by combining a global point cloud with a learned fusion approach in image space.
We present a comprehensive review of neural fields by providing context, mathematical grounding, and an extensive literature review.
A companion website contributes a living version that can be continually updated by the community.
We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error under RGB supervision.
We propose landmark-based verbal directions as an alternative to mini-maps, and examine the development of spatial knowledge in an open-world urban game environment.