About Me

I completed my PhD in computer science at the Mobile Robotics Lab at McGill University, supervised by Gregory Dudek and David Meger. My dissertation centers on 3D semantic scene understanding, with a specific focus on camera pose estimation under large viewpoint changes. I now work as a research scientist at the Samsung AI Center in Montreal. In my spare time, I work on my aquariums, play the piano, and compose music. My orchestral work, Voices of the Sea, was performed by the Vancouver Symphony Orchestra as part of the Jean Coulthard Readings.

Research Projects

Pose Estimation for Disparate Views

Given RGB images captured from two widely separated viewpoints, I estimate the scale and pose of co-visible objects while simultaneously recovering the 6-DOF camera transformation between the views. Related publications: [ICRA'20] [ICRA'19a] [CRV'18] [IROS'17a]
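The geometric core of this problem, aligning two views of the same objects, can be illustrated with a generic building block. The sketch below is not the method from these papers; it is a standard Kabsch/Procrustes solve that recovers a rigid 6-DOF transform (rotation and translation) from matched 3D points, the kind of subproblem a pose pipeline solves once correspondences are available.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Estimate rotation R and translation t mapping points P onto Q
    (Kabsch/Procrustes via SVD). P, Q: (N, 3) arrays of matched points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
a = np.pi / 4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = estimate_rigid_transform(P, Q)
```

Under large viewpoint changes the hard part is establishing those correspondences at all, which is where object-level reasoning comes in; the rigid solve itself stays the same.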

Unsupervised Learning of Semantic Spatial Relationships

Given a set of scenes containing unlabeled object bounding cuboids, our system learns a set of semantically meaningful spatial concepts that align closely with notions such as left, right, on, or facing. Related publication: [ICRA'16]
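To give a feel for the setup, here is a deliberately simplified sketch, not the model from the paper: if each object pair is reduced to a relative displacement vector, clustering those vectors with plain k-means already recovers coarse groupings that a human might label "left of" or "on top of". The data and cluster names here are hypothetical.

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Plain Lloyd's k-means with fixed initial centers."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, labels

# Hypothetical data: 2D displacement vectors between object pairs,
# drawn from two groups a human might call "left of" and "on top of".
rng = np.random.default_rng(1)
left_of = rng.normal([-1.0, 0.0], 0.05, size=(30, 2))
on_top  = rng.normal([0.0, 1.0], 0.05, size=(30, 2))
X = np.vstack([left_of, on_top])

# Initialize with one point from each group so convergence is deterministic.
centers, labels = kmeans(X, X[[0, -1]])
```

The actual problem is much harder than this toy: real concepts depend on object extents, orientation, and context, not just a displacement vector, and the number and identity of concepts is not given in advance.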

The Aqua Project

I work with Aqua, a family of amphibious hexapod robots. I have contributed to its vision system and human-robot interaction layer, and have led numerous dives in which we deployed the robot along the coast of Barbados. The picture above shows me diving with Aqua. See project page. Related publications: [JFR'17] [ICRA'19b] [OCEANS'18] [IROS'17b] [FSR'16]

Graphical State Space Programming (GSSP)

GSSP is a framework for robot task specification, which allows the user to graphically specify constraints on the robot's state space and attach procedural code blocks that execute when those constraints are satisfied. This approach simplifies the specification of spatial constraints while maintaining the expressivity of textual code. See project page and video. Related publication: [ICRA'11]
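The core idea, a state-space constraint with attached code, can be sketched in a few lines. This is a minimal illustration under my own assumptions, not the actual GSSP implementation: a region is a predicate over the robot's state, and its code block fires once when the state first enters the region.

```python
class Region:
    """A state-space constraint with an attached code block (toy sketch)."""
    def __init__(self, name, contains, on_enter):
        self.name = name
        self.contains = contains      # predicate over the robot state
        self.on_enter = on_enter      # code block run on entering the region
        self.inside = False

    def update(self, state):
        was_inside, self.inside = self.inside, self.contains(state)
        if self.inside and not was_inside:   # fire only on the transition in
            self.on_enter(state)

events = []
# Hypothetical region: the robot is "docked" when both coordinates are < 1.
dock = Region("dock",
              contains=lambda s: s[0] < 1.0 and s[1] < 1.0,
              on_enter=lambda s: events.append(("entered dock", s)))

for state in [(5.0, 5.0), (3.0, 0.5), (0.5, 0.5), (0.2, 0.8)]:
    dock.update(state)
```

In GSSP the regions are drawn graphically rather than written as predicates, which is what makes spatial constraints easy to specify while the attached code keeps full programmatic expressivity.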



Refereed Conference Papers