In the camera- and monitor-mediated world of videoconferencing, the limitations of communications bandwidth and equipment capability tend to place a severe handicap on the senses of sight and sound and eliminate the sense of touch. As a result, even in state-of-the-art videoconference rooms using the highest quality equipment, the sense of co-presence enjoyed by individuals in the same room is never fully achieved. Gaze awareness, recognition of facial gestures, social cues through peripheral or background awareness, and sound spatialization through binaural audio, all important characteristics of multi-party interaction, are often lost in a videoconference.

Our objective is to introduce the computer as an intermediary in the communication. At the same time, it is necessary to move from the restricted videoconference environment of television monitors and stereo speakers to immersive spaces in which video fills the participant's visual field and is reinforced by spatialized audio cues. Haptic feedback should be exploited to help bridge the physical separation of remote individuals. This feedback could range from reproducing floor vibrations as a user walks about to conveying the tactile response of a surgeon's instrument as it moves through different tissues.

Our testbed, currently in development, consists of a number of audio-insulated rooms, each equipped with high-resolution video projectors, cameras, microphones, and multi-channel audio, interconnected by an ATM switch. The video is rear-projected to cover three walls of each room, thereby encompassing the users' visual field and creating the illusion of a larger shared space. Multi-channel audio is used to produce effective spatialization of sound sources, enhancing the sense of co-presence.
While 3D rendering will be explored, our emphasis lies in the effective use of scaling, perspective transformation, and image-blending techniques to achieve a reasonable sense of co-presence in 2D, without the physical constraints of special viewing equipment.
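The text does not specify how the 2D compositing is implemented; as a minimal sketch of the kind of operations named above, the fragment below shows scalar alpha blending and nearest-neighbor scaling on video frames represented as NumPy arrays. The function names and the array-based frame representation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def alpha_blend(fg, bg, alpha):
    """Composite a foreground frame over a background frame
    with a scalar alpha in [0, 1]."""
    out = alpha * fg.astype(np.float32) + (1.0 - alpha) * bg.astype(np.float32)
    return out.astype(np.uint8)

def scale_nearest(frame, factor):
    """Nearest-neighbor scaling, e.g. to enlarge a remote participant
    so that apparent size matches conversational distance."""
    h, w = frame.shape[:2]
    rows = (np.arange(int(h * factor)) / factor).astype(int)
    cols = (np.arange(int(w * factor)) / factor).astype(int)
    return frame[rows][:, cols]

# Example: blend a (hypothetical) remote participant's frame into a backdrop.
fg = np.full((4, 4, 3), 200, dtype=np.uint8)
bg = np.full((4, 4, 3), 40, dtype=np.uint8)
blended = alpha_blend(fg, bg, 0.5)   # every pixel becomes 120
enlarged = scale_nearest(fg, 2.0)    # 4x4 frame becomes 8x8
```

A full perspective transformation would additionally warp each frame by a 3x3 homography; the blending step above would then operate on the warped frames.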