Thesis Research Project

``An Automated Robotic System for Synthesis of Image-Based Virtual Reality''

Supervisor: Professor G. Dudek


[Image: a QuickTime VR movie I made of our mobile robotics lab]

------

Introduction

This report describes the implementation of an automated technique for creating image-based Virtual Reality (VR) environments.

Traditional VR systems use 3D computer graphics to model virtual environments. In the past, traditional VR has been limited to small, ``unreal''-looking environments, due in part to the overwhelming complexity of rendering virtual environments in real time. This complexity stems from the fact that the number of surfaces needed to create a realistic-looking environment grows rapidly with the complexity of the scene being modelled. Furthermore, because of the number of surfaces involved, the creation of such environments is a laborious task which, for the most part, must be done manually. For these reasons, traditional VR environments tend to remain simplistic and look unrealistic. A realistic-looking virtual human, for example, has yet to be modelled with traditional VR techniques.

An alternative to traditional VR techniques is image-based VR. Image-based VR uses images taken from video cameras to achieve a more realistic view of the environment. One example of this type of technology is the branching movie. Branching movies are made from multiple movie segments which depict spatial navigation and which are connected at selected branch points; at each branch point, the user may choose to navigate along a different path. Unfortunately, branching movies allow only a limited amount of interaction with the virtual environment.

The automated robotic system presented in this report uses image-based VR to create a 360-degree cylindrical virtual environment. The QuickTime VR 2.0 Authoring Tools Suite from Apple Computer, Inc. is used, in part, to create these image-based virtual environments. Once an environment is created, the user may explore it by changing the viewpoint of the camera through panning, tilting, and zooming. The final product gives the effect of looking out at the environment from a given location, and offers the user the ability to look left or right (up to 360 degrees), up or down, and to zoom in or out of a particular region. Several such cylindrical environments may then be connected together to create a complete virtual environment. Although the user may only ``hop'' from one panoramic point to another, this approach still gives a more realistic ``feel'' than other VR techniques. Apple Computer, Inc. is currently working on enhancing the transition when moving from one node to another; one of their strategies is to display a pre-recorded linear movie which depicts the physical motion between the two nodes.
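
To make the viewing model concrete, the sketch below shows the cylindrical-projection geometry that such a panoramic viewer relies on: a perspective view at a given pan and tilt angle is produced by casting rays through the output pixels and sampling the corresponding columns and rows of the panorama. This is only an illustrative NumPy sketch of the underlying math, not Apple's implementation; the function name, default image sizes, and field-of-view parameter are assumptions.

    # Illustrative sketch only: cylindrical-panorama viewing geometry,
    # not the QuickTime VR viewer's actual code.
    import numpy as np

    def render_view(pano, pan_deg, tilt_deg, hfov_deg=60.0, out_w=320, out_h=240):
        """Sample a perspective view from a full 360-degree cylindrical panorama."""
        H, W = pano.shape[:2]
        f_pan = W / (2.0 * np.pi)                  # panorama pixels per radian of azimuth
        f_view = (out_w / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)

        # Output pixel grid, centred on the optical axis (x right, y down, z forward).
        u, v = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                           np.arange(out_h) - out_h / 2.0)
        x, y, z = u, v, np.full_like(u, f_view)

        # Tilt about the x-axis, then pan about the y-axis.
        t = np.radians(tilt_deg)
        y, z = y * np.cos(t) - z * np.sin(t), y * np.sin(t) + z * np.cos(t)
        p = np.radians(pan_deg)
        x, z = x * np.cos(p) + z * np.sin(p), -x * np.sin(p) + z * np.cos(p)

        # Intersect each viewing ray with the cylinder and look up the panorama pixel.
        theta = np.arctan2(x, z)                   # azimuth of the ray
        height = y / np.hypot(x, z)                # height on the unit cylinder
        col = (theta * f_pan + W / 2.0) % W        # wraps around at 360 degrees
        row = np.clip(height * f_pan + H / 2.0, 0, H - 1)
        return pano[row.astype(int), col.astype(int)]

Panning simply shifts which columns are sampled (with wrap-around at 360 degrees), which is what makes the cylindrical representation attractive for this kind of viewer.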

Unfortunately, the creation of such an image-based environment is normally slow and tedious. Many steps are required: taking numerous, precisely positioned photographs; scanning the photographs into digital format; ``stitching'' them together into a single seamless cylindrical panoramic image; and converting that image into a viewable format. The system presented in this report automates all of the steps required to create such a cylindrical virtual environment. The result is a realistic virtual environment which may be created quickly and without human intervention.
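
As an illustration of the first step, the short sketch below computes how many photographs are needed to cover a full 360-degree sweep, and at which pan angles to take them, given the camera's horizontal field of view and the overlap the stitching software needs between adjacent shots. The function name and the 40% overlap figure are illustrative assumptions, not values taken from the actual system.

    import math

    def capture_plan(hfov_deg, overlap=0.4):
        """Pan angles (degrees) for one full 360-degree panoramic sweep."""
        # overlap=0.4 is an assumed value, not the system's actual setting
        step = hfov_deg * (1.0 - overlap)       # angular advance between shots
        n = math.ceil(360.0 / step)             # shots needed to close the circle
        step = 360.0 / n                        # spread them evenly around the circle
        return [i * step for i in range(n)]

For example, a 40-degree lens with 40% overlap gives 15 evenly spaced shots, 24 degrees apart.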

One motivation for this work is to simplify the creation of VR environments, a complicated task which must otherwise be done manually. A further incentive is to improve VR products and make them more accessible to the general public.

There are innumerable applications for this type of technology in areas such as virtual travel, real estate, education, and entertainment. For example, a teleoperated or autonomous robot equipped with the system described in this report could explore areas that are toxic, remote, or otherwise dangerous for humans, and create an image-based virtual environment of them. Such an environment could then support the maintenance or decontamination of nuclear power plants, or the exploration of remote environments such as Mars. The system could also be used to capture remote sites such as Machu Picchu or the Taj Mahal, allowing virtual travelers to ``visit'' other countries from the comfort of their own homes.

My project focuses on the design and implementation of a software package which automatically creates a complete virtual environment using a mobile robot, a Pan Tilt Unit (PTU), and a digital camera.
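
A rough outline of the capture loop for a single panoramic node is sketched below. The PTU and camera interfaces, their method names, and the stitcher callback are all hypothetical placeholders; in the actual system, the stitching and format conversion are handled by the QuickTime VR authoring tools.

    import math

    def build_node(ptu, camera, stitch, hfov_deg=40.0, overlap=0.4):
        """Capture one panoramic node at the robot's current position."""
        # hfov_deg and overlap are assumed example values
        n = math.ceil(360.0 / (hfov_deg * (1.0 - overlap)))
        frames = []
        for i in range(n):
            ptu.move_to(pan=i * 360.0 / n, tilt=0.0)   # hypothetical PTU driver call
            ptu.wait_until_settled()                   # let vibration die down first
            frames.append(camera.grab())               # hypothetical camera driver call
        return stitch(frames)                          # hand the frames to the stitcher

The robot could then drive to the next node location and repeat, with the resulting panoramas linked together into the complete multi-node virtual environment.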


Please feel free to send any comments:

Philippe Ciaravola, M.Sc.
ciara@cim.mcgill.ca

Last Changed: 6-Jan-1998