David Meger
Assistant Professor
School of Computer Science
McGill University
Montreal, Quebec, Canada
Google Scholar Page

Contact


Email: dmeger@cim.mcgill.ca
Office: Room 112N, McConnell Engineering Building
3480 University Street, Montreal, QC
H3A 0E9
Office Phone: +1 (514) 398 3743

Teaching


Journal Papers


  • David Meger, Per-Erik Forssén, Kevin Lai, Scott Helmer, Sancho McCann, Tristram Southey, Matthew Baumann, James J. Little, David G. Lowe. "Curious George: An Attentive Semantic Robot". Robotics and Autonomous Systems Journal, Volume 56, Number 6, Pages 503-511. June 2008. [pdf]
  • Ioannis Rekleitis, David Meger, and Gregory Dudek. "Simultaneous Planning, Localization, and Mapping in a Camera Sensor Network". Robotics and Autonomous Systems, 54(11), pages 921-932, November 2006. [pdf]

Refereed Conference Papers


  • (Best paper award finalist) David Meger, Juan Camilo Gamboa Higuera, Anqi Xu, Philippe Giguère and Gregory Dudek. "Learning Legged Swimming Gaits from Experience". The International Conference on Robotics and Automation (ICRA), 2015. [project page]
  • Travis Manderson, Jimmy Li, David Cortés Poza, Natasha Dudek, David Meger and Gregory Dudek. "Towards Autonomous Robotic Coral Reef Health Assessment". The Conference on Field and Service Robotics (FSR), 2015.
  • David Meger, Florian Shkurti, David Cortés Poza, Philippe Giguère, Gregory Dudek. "3D Trajectory Synthesis and Control for a Legged Swimming Robot". The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014. [project page]
  • Anqi Xu, Qiwen Zhang, David Meger, and Gregory Dudek. "Interactive Autonomous Driving through Adaptation from Participation". The International Conference on Intelligent Unmanned Systems (ICIUS), 2014. [pdf] [bibtex]
  • Doug Cox, Darren Fairall, Neil MacMillan, Dimitri Marinakis, David Meger, Saamaan Pourtavakoli, and Kyle Weston. "Trajectory Inference using a Motion Sensing Network". Computer and Robot Vision (CRV), 2014. [pdf]
  • Sina Radmard, David Meger, Elizabeth A. Croft, and James J. Little. "Overcoming Unknown Occlusions in Eye-in-Hand Visual Search". The International Conference on Robotics and Automation (ICRA), 2013. [pdf]
  • Michael Stark, Jonathan Krause, Bojan Pepik, David Meger, James J. Little, Bernt Schiele, and Daphne Koller. "Fine-Grained Categorization for 3D Scene Understanding". The 23rd British Machine Vision Conference (BMVC). Surrey, UK. September 3rd-7th, 2012. [pdf] [bibtex]
  • David Meger and James J. Little. "The UBC Visual Robot Survey: A benchmark for robot category recognition". The International Symposium on Experimental Robotics (ISER). Quebec City, Canada. June 21st, 2012. [pre-print]
  • Sina Radmard, David Meger, Elizabeth Croft, James J. Little. "Overcoming Occlusions in Eye-In-Hand Visual Search". The American Control Conference (ACC). Montreal, Canada. June, 2012. [ieeexplore]
  • Parnian Alimi, David Meger, and James J. Little. "Object Persistence in 3D for Home Robots". The Semantic Perception, Mapping, and Exploration (SPME) workshop at ICRA. St. Paul, United States. May 14th, 2012. [pdf]
  • David Meger, Christian Wojek, Bernt Schiele, and James J. Little. "Explicit Occlusion Reasoning for 3D Object Detection". The 22nd British Machine Vision Conference (BMVC). August 29th - September 2nd, 2011. [final paper] [1-page abstract]
  • David Meger and James J. Little. "Mobile 3D Object Detection in Clutter". Oral at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). San Francisco, United States. September 25-30, 2011. [final paper] [overview slide]
  • Scott Helmer, David Meger, Marius Muja, James J. Little, and David G. Lowe. "Multiple Viewpoint Recognition and Localization". Oral at the Asian Conference on Computer Vision (ACCV). Queenstown, New Zealand. November 8-12, 2010. [pdf]
  • David Meger, Ankur Gupta, and James J. Little. "Viewpoint Detection Models for Sequential Embodied Object Category Recognition". International Conference on Robotics and Automation, ICRA 2010. [pdf]
  • David Meger, Marius Muja, Scott Helmer, Ankur Gupta, Catherine Gamroth, Tomas Hoffman, Matthew Baumann, Tristram Southey, Pooyan Fazli, Walter Wohlkinger, Pooja Viswanathan, James J. Little, David G. Lowe, and James Orwell. "Curious George: An Integrated Visual Search Platform". Computer and Robot Vision, CRV 2010. [pdf]
  • David Meger, Dimitri Marinakis, Ioannis Rekleitis, Gregory Dudek. "Inferring a Probability Distribution Function for the Pose of a Sensor Network using a Mobile Robot". International Conference on Robotics and Automation, ICRA 2009. [pdf]
  • Pooja Viswanathan, David Meger, Tristram Southey, James J. Little and Alan Mackworth. "Automated Spatial-Semantic Modeling with Applications to Place Labeling and Informed Search". Computer and Robot Vision, CRV 2009. [pdf]
  • Babak Ameri, David Meger, Keith Power, and Yang Gao. "UAS Applications: Disaster & Emergency Management". American Society for Photogrammetry and Remote Sensing conference, ASPRS 2009. [pdf]
  • Per-Erik Forssén, David Meger, Kevin Lai, Scott Helmer, James J. Little, David G. Lowe. "Informed Visual Search: Combining Attention and Object Recognition". International Conference on Robotics and Automation, ICRA 2008. [pdf]
  • David Meger, Ioannis Rekleitis, Gregory Dudek. "Heuristic Search Planning to Reduce Exploration Uncertainty". IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2008. [pdf]
  • Dimitri Marinakis, David Meger, Ioannis Rekleitis, and Gregory Dudek. "Hybrid Inference for Sensor Network Localization using a Mobile Robot". American Association for Artificial Intelligence, AAAI 2007. [pdf]
  • David Meger, Per-Erik Forssén, Kevin Lai, Scott Helmer, Sancho McCann, Tristram Southey, Matthew Baumann, James J. Little, David G. Lowe and Bruce Dow. "Curious George: An Attentive Semantic Robot". IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2007 Workshop: From sensors to human spatial concepts, November 2007. [abstract]
  • David Meger, Ioannis Rekleitis, and Gregory Dudek. "Autonomous Mobile Robot Mapping of a Camera Sensor Network". The 8th International Symposium on Distributed Autonomous Robotic Systems (DARS), July 2006, Minneapolis, Minnesota, pages 155-164. [pdf]

Theses


  • David Meger. "Visual Object Recognition for Mobile Platforms". Doctor of Philosophy, The University of British Columbia. Supervised by Dr. James J. Little. PhD committee: David G. Lowe, Nando de Freitas, Elizabeth Croft. [pdf] [UBC library online thesis access]
  • David Meger. "Planning, Localization, and Mapping for a Mobile Robot in a Camera Network". Master of Science Thesis, McGill University. Dean's Honor List. Supervised by Ioannis Rekleitis and Gregory Dudek. [pdf]
  • David Meger. "Feature Based Human Face Detection and Tracking". UBC Honours Computer Science Thesis (undergraduate). Supervised by David Lowe and Elizabeth Croft.

Non-Refereed Contributions and Abstracts


  • Scott Helmer, David Meger, Pooja Viswanathan, Sancho McCann, Matthew Dockrey, Pooyan Fazli, Tristram Southey, Marius Muja, Michael Joya, Jim Little, David Lowe, Alan K. Mackworth. "Semantic Robot Vision Challenge: Current State and Future Directions". The IJCAI-09 Workshop on Competitions in Artificial Intelligence and Robotics, 2009. [pdf]
  • Scott Helmer, David Meger, Per-Erik Forssén, Tristram Southey, Sancho McCann, Pooyan Fazli, James J. Little, David G. Lowe. "The UBC Semantic Robot Vision System". AAAI07: Mobile Robot Competition and Exhibition, Abstract. Pages 1983-1984. July 2007. [abstract]
  • Scott Helmer, David Meger, Per-Erik Forssén, Sancho McCann, Tristram Southey, Matthew Baumann, Kevin Lai, Bruce Dow, James J. Little, David G. Lowe. "Curious George: The UBC Semantic Robot Vision System". AAAI Tech Report numbered IAAAI-WS-08-XX. October 2007. [abstract]

Adaptive Gait Control for Swimming Robots


Legged swimming robots, such as the Aqua platform, present a difficult control problem due to fluid dynamics, force effects at varying time scales, and the general challenge of operating underwater. As a postdoc at McGill's Mobile Robotics Lab, I have led a project to improve Aqua's control systems using state-of-the-art reinforcement learning techniques.

We began by extending Aqua's control system to better handle 3D motions such as barrel rolls and corkscrews. We have also considered the human-robot interaction problem facing a diver who must program the robot to execute such motions productively. Our results appeared at IROS 2014 ([3D Trajectory Synthesis and Control for a Legged Swimming Robot]).

Recently, we have demonstrated learning novel gait types from experience, using Gaussian Process dynamics models fit to the robot's interaction data. This work was a best paper award finalist at ICRA 2015 ([Learning Legged Swimming Gaits from Experience]).
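The core idea of this style of model-based learning can be sketched in a few lines: fit a Gaussian Process to observed (state, action) → next-state data, then query the posterior mean as a learned dynamics model. Everything below is illustrative (toy dynamics, made-up dimensions), not the actual Aqua model.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / length_scale ** 2)

def gp_fit(X, y, noise=1e-2):
    """Precompute alpha = (K + noise*I)^-1 y for posterior-mean queries."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    return np.linalg.solve(K, y)

def gp_predict(X_train, alpha, X_query):
    """GP posterior mean at the query points."""
    return rbf_kernel(X_query, X_train) @ alpha

# Toy "dynamics": next-state delta depends nonlinearly on (state, action).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))   # columns: state, action
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]    # observed next-state delta
alpha = gp_fit(X, y)
pred = gp_predict(X, alpha, np.array([[0.2, 0.1]]))
```

With a model like this in hand, a gait controller can be improved by planning against the model instead of against the physical robot, which is what makes the approach data-efficient.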

Multi-view 3D Object Recognition


3D object recognition

The main focus of my PhD thesis was recognizing objects along with their 3D location, scale, and pose using sensor data from multiple views. I considered this problem in indoor kitchen scenes using domestic robots and in outdoor urban driving scenes using on-board data from cars. The methods I developed include probabilistic models that combine geometry with appearance information learned from labelled training images.
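The combination of geometry and appearance can be illustrated with a minimal Bayesian fusion sketch: each candidate 3D hypothesis gets an appearance score from a detector and a geometric prior (e.g. agreement with the expected physical size), and the posterior is their normalized product. The numbers and scoring functions below are hypothetical.

```python
import numpy as np

def fuse(appearance_scores, geometry_scores):
    """Posterior over hypotheses, proportional to appearance x geometry."""
    post = appearance_scores * geometry_scores
    return post / post.sum()

appearance = np.array([0.6, 0.3, 0.1])  # per-hypothesis detector scores
geometry = np.array([0.2, 0.7, 0.1])    # agreement with expected 3D size
posterior = fuse(appearance, geometry)
best = int(np.argmax(posterior))
```

Note that the hypothesis preferred by appearance alone (index 0) loses to index 1 once the geometric evidence is folded in, which is exactly the effect that makes geometry useful under partial occlusion.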

The use of 3D geometry is particularly effective for recognizing objects that are partially occluded, such as a bowl on a messy table or the cars in a parking lot. In several of the settings I have studied, my methods outperform the state of the art, and work continues on making them more general and scalable.

The UBC Visual Robot Survey dataset - (UBCVRS)


I have collected a large dataset of geometrically registered robot data of kitchen scenes called the UBC Visual Robot Survey (VRS). The goal of this dataset is to allow "simulation with real data" for the task of determining which objects are present near a robot as it moves through an environment.

Dataset features include the ability to query numerous "views" of every scene and to issue simulated control actions that select from the recorded data, approximating what a robot would actually see along the requested path. The data is annotated to allow experimental comparison against ground truth. All of the data, instructions, and support code are available to the community at the [UBC VRS project page].
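The "simulated control action" idea reduces to a simple lookup: given views pre-recorded at known poses, return the stored view whose capture pose is closest to the pose the simulated robot requests. The snippet below is a hypothetical sketch of that mechanism, not the actual UBC VRS API.

```python
import math

# Pre-recorded views indexed by capture pose (x, y, heading in radians).
# Poses and view ids here are made up for illustration.
recorded = {
    (0.0, 0.0, 0.00): "view_000",
    (1.0, 0.0, 1.57): "view_001",
    (1.0, 1.0, 3.14): "view_002",
}

def nearest_view(x, y, theta):
    """Return the recorded view closest to the requested robot pose."""
    def dist(pose):
        # Position error plus a small weight on heading mismatch.
        return math.hypot(pose[0] - x, pose[1] - y) + 0.1 * abs(pose[2] - theta)
    return recorded[min(recorded, key=dist)]
```

A simulated trajectory then becomes a sequence of such lookups, so recognition algorithms can be evaluated repeatably on real sensor data without re-driving the robot.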

Object Recognition Contest - (SRVC)


Curious George Robot Image

Many important real-world tasks require automated visual understanding, but this problem is usually studied in controlled lab environments. In order to test our methods in the wild, a team of graduate students, including myself, has built a system we call [Curious George].

Like its namesake, this robot platform is designed to be an active explorer. It has entered the [Semantic Robot Vision Challenge], an international contest requiring robots to find items in the world based on training data collected automatically from the internet. Our team placed first in the robot league of the SRVC in 2007 and 2008 and won the software league in the most recent contest in 2009, outperforming competitors from institutions around the globe.

Geo-Spatial Intelligent Decision Systems for Disaster Management - (GIDE)


Aerial Map Image

Rapid access to information is essential during disaster situations. At [GEOSYS Technology Solutions], I've been developing an automated surveillance system that collects aerial images using an Unmanned Aerial Vehicle (UAV), transfers and processes these images in real time, and delivers an up-to-date view of the situation to disaster managers. Sub-problems in this project include automated feature matching and bundle adjustment to recover accurate vehicle pose information, and a pipelined (parallel) image-processing architecture to deliver data as rapidly as possible.
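The pose-from-matches step can be sketched in its simplest form: given matched feature points between two frames, recover the camera motion by least squares. The toy version below estimates a pure 2D translation (the mean of the match displacements), which is the degenerate single-pair case that bundle adjustment generalizes to many frames and full 6-DOF poses; the data is made up for illustration.

```python
import numpy as np

def estimate_translation(pts_a, pts_b):
    """Least-squares translation t minimizing ||(pts_a + t) - pts_b||^2.

    For a pure translation model the optimum is simply the mean
    displacement of the matched points.
    """
    return (pts_b - pts_a).mean(axis=0)

# Matched feature locations in two consecutive aerial frames (toy data).
frame_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_shift = np.array([2.0, -1.0])
frame_b = frame_a + true_shift

t_est = estimate_translation(frame_a, frame_b)
```

In the real pipeline the same residual (reprojection error over all matches) is minimized jointly over every camera pose and 3D point, which is what keeps the stitched aerial map geometrically consistent.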

Check out my project page for an update on the most recent work on UAV mapping [here].

Mapping of a Camera Sensor Network with a Mobile Robot


Sensor Network Image

During my M.Sc. at McGill University's [Mobile Robotics Laboratory], I worked on a project that used images from static cameras in an environment (such as a building security system) to aid a mobile robot with mapping and navigation, while also constructing a map of the camera locations themselves. We continue to collaborate on this project, as it has proven to be an excellent test scenario for robust position-inference methods and for planning approaches that allow a robot to explore an environment with minimal uncertainty in its resulting map estimates.
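The benefit of combining the two information sources can be seen in the most elementary fusion step: merging an uncertain odometry estimate of the robot's position with a more certain camera detection by inverse-variance weighting. This is a minimal 1-D Gaussian sketch with illustrative numbers, not the full inference machinery used in the project.

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Product of two 1-D Gaussian estimates: precision-weighted mean.

    The fused variance is always smaller than either input variance,
    which is why camera observations shrink the robot's map uncertainty.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mu, 1.0 / (w1 + w2)

# Odometry says x = 2.0 (variance 1.0); a camera says x = 3.0 (variance 0.25).
mu, var = fuse_gaussian(2.0, 1.0, 3.0, 0.25)
```

The fused estimate lands much closer to the camera's reading because the camera is four times more precise, and the combined variance (0.2) is below both inputs; the full system performs the analogous update jointly over robot trajectory and camera poses.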