
Proceedings of the 4th IEEE International Workshop on Robot and Human Communication (RO-MAN'95), 5-7 July 1995, Tokyo


Telerobotic Control Using Augmented Reality

Paul Milgram, Anu Rastogi, Julius J. Grodski*

Ergonomics in Teleoperation and Control (ETC) Lab

Department of Industrial Engineering, University of Toronto

Toronto, Ontario, Canada M5S 1A4

milgram@argos.rose.utoronto.ca, anu@argos.rose.utoronto.ca, jul@dciem.dnd.ca

ABSTRACT

A taxonomy for classifying human-mediated control of remote manipulation systems is proposed, based on three dimensions: degree of machine autonomy, level of structure of the remote environment, and extent of knowledge, or modellability, of the remote world. For certain unstructured, and thus difficult to model, environments, a case is made for remote manipulation by means of director/agent control, rather than telepresence. The ARGOS augmented reality toolkit is presented as a means for gathering quantitative spatial information about – i.e., for interactively creating a partial model of – a remotely viewed 3D worksite. This information is used for off-line local programming of the remote manipulator – i.e. Virtual Telerobotic Control – and, when ready, the final commands are transmitted to the manipulator control system for execution.

1. INTRODUCTION: A TAXONOMY OF REMOTE CONTROL

Technologies for managing robotic operations in remote unstructured environments continue to grow in sophistication, facilitating procedures in domains such as hazardous waste handling, subsea pipeline inspection and repair, battlefield robotics, and construction in space. Nevertheless, most who are familiar with the limitations of the relevant advanced control technologies concur that complex robotic tasks such as these are not likely to be achievable with significantly high degrees of autonomy in the foreseeable future, due to cost constraints, technological limitations, and/or high levels of operational uncertainty. It is clear, therefore, that some form of mediation by a human operator will be required for some time to come.

In order to carry out comparisons between different control schemes, it is useful to recognise some of the distinctions which characterise the circumstances and control strategies defining these operations. It is important, in other words, that conventionally accepted terminology be used, and thus that some form of operational taxonomy be adopted.

Levels of Autonomy

One of the key aspects that must be considered is the role played by the human operator (HO), from the point of view of trading off decision making and control authority between the human and other elements of the control system [e.g. Burtnyk & Greenspan, 1991]. In general, for anything other than completely autonomous systems, a spectrum of candidate HO roles can be envisaged, ranging anywhere from manual to supervisory control, as illustrated in Fig. 1. A thorough treatment of the various control schemes shown can be found in [Sheridan, 1992].


Fig. 1. Taxonomy of autonomy in remote operations

Manual teleoperation, the most basic operating mode, defines all situations in which the HO is constrained to remain continuously in the control loop. If the HO stops controlling, the loop is opened. While emphasising that the continuum shown in Fig. 1 is not intended necessarily to reflect relative levels of technological sophistication, we note that the rapidly emerging domain of advanced telepresence technologies [e.g. Tachi et al, 1990] actually places the HO rather low down on the autonomy continuum. This is because the objective behind telepresence, or tele-existence, is to provide the means for the HO to influence remote operations as if she were actually present at that worksite. Typically this involves some form of master-slave control system, where all actions of the master arm initiated by the HO are mimicked by the slave manipulator. Clearly, in terms of autonomy, this is very close to the closed loop manual teleoperation case; when the HO ceases to operate, the loop is effectively opened.

As indicated in Fig. 1, supervisory control describes a wide range of options, according to which the HO can take on a variety of supervisory roles [Sheridan, 1992]. Director / Agent (D/A) control, as described for example by Zhai & Milgram [1991; 1992], can be considered a basic form of supervisory control, where the human operator acts as a director and the limited intelligence robot acts as her agent. With such a scheme it is not necessary for the HO to feel present at the remote worksite; rather, she may feel herself to be adjacent to the robot. In addition to alleviating the need to remain continuously in control, which might in fact become rather tedious for tasks which are either lengthy or which execute at uncomfortably slow rates, D/A control places less stringent technological requirements on the design of the human/robot (H/R) interface. Not only is high fidelity master-slave control no longer necessary, but the requirement for the HO to be immersed in the remotely viewed environment by means of a head-mounted display (HMD) system may also be relaxed, such that simpler monitor-based display systems, with which the HO looks in at the remote world, may suffice quite well.

Structured vs Unstructured Worlds

Another of the key aspects used to distinguish one operating environment from another is the presence or absence of structure. Generally speaking, as shown in Fig. 2, a structured environment is one made up of known objects. In highly structured worlds, these may be situated at predetermined locations and at known orientations. In completely unstructured worlds, none of the objects situated there are known beforehand. As shown in Fig. 2, a continuum of degree of structure can be assumed for describing all cases between the two extremes.


Fig. 2. Taxonomy of structure of remote worlds

Extent of World Knowledge

Closely related to the degree of structure of the remote world is the ability to model it. Clearly, the investment necessary for making a model of a structured environment is less than for an unstructured one, and there are many practical advantages to doing so. For taxonomic purposes, therefore, we consider as our final dimension the extent of prior knowledge about the world in which operations are to be carried out. This dimension is illustrated in Fig. 3.

At one end of the spectrum, the case "remote world fully modelled" refers to situations in which the robotic environment and the objects being manipulated within it are largely invariant and well known in advance, and the operational procedures are prespecified. Such a case would describe many industrial robotic applications, for example, in which the robots, the worksite and the parts being manipulated have presumably been modelled and fabricated to a known degree of precision. Similarly, this case might also describe projected future teleoperations in space. Clearly, the case of fully modelled worlds is highly correlated with the case of highly structured worlds shown in Fig. 2.

The opposite end of the spectrum in Fig. 3 refers to remote environments in which no prior knowledge is available, thereby making that world impossible to model. This case is more likely (though not necessarily) to correspond to an unstructured environment. The continuum of intermediate cases, as illustrated in Fig. 3, covers a variety of situations in which some knowledge of the remote world has been obtained. For example, prior knowledge might exist about the sizes, shapes, etc. of objects to be manipulated, while nothing is known about where these objects are located. Alternatively, particular object locations might be known, while nothing is known about what the associated objects actually are, or what relevance they might have to an operation. It is important to point out that as soon as any data whatsoever about the remote world are acquired, it must cease to be considered a completely unmodelled world and may instead be referred to as “partially modelled”.


Fig. 3. Taxonomy of world knowledge for defining remote operations


Teleoperation in Unstructured Environments

The taxonomy of control autonomy, world structure and prior world knowledge established above can now be consolidated to facilitate a discussion of global issues associated with remote control in unstructured environments, according to the framework shown in Fig. 4. The shaded area in the top right hand corner represents the case in which the robotic environment and the objects being manipulated are well known in advance. In this highly structured case the remote world can be accurately modelled, thus making preprogrammed operations feasible. The human operator can now perform as a supervisor, who monitors operations and intervenes only when abnormalities are detected.

The region shown at the bottom left, on the other hand, results when the remote world is completely unstructured, as is the case for example with one of the newest frontiers in teleoperation, telemedicine (especially telesurgery). In general, essentially nothing might be known about the operating environment, uncertainties might be much larger, and the required procedures or their constraints may themselves vary during the operation. Since such worksites cannot easily be modelled, the level of control autonomy must be low and a relatively high degree of human involvement is to be expected. To perform such tasks efficiently, good visual, auditory and, if possible, haptic cues are required to provide the operator with a veridical sense of the remote environment – i.e. telepresence.


Fig. 4. Consolidated taxonomy of remote control, according to structure of remote world, extent of world knowledge (EWK) and degree of autonomy. VTC indicates the area in the taxonomic space for which Virtual Telerobotic Control is intended.


For the large range of cases within the cube of Fig. 4 corresponding to environments which are less than highly structured (cf. Fig. 2), some degree of supervisory control involving autonomous preprogrammed operations is still conceivably possible, but only when sufficiently reliable local sensors are available for creating and maintaining the environmental information necessary for machine support [Schebor & Turney, 1991]. Even though such systems are capable of assisting the operator substantially, their cost is potentially very high, and they may be robust over only the limited set of environmental conditions within which the machine intelligence is capable of working. Current telerobotic systems based on machine vision, for example, are capable at a practical level of recognising only predefined objects in the scene [Spofford et al, 1991], and thus fail to provide the flexibility needed for effective teleoperation in unstructured environments.

To overcome such problems, realistic teleoperation tasks should be appropriately allocated between the human operator and machine intelligence [Burtnyk & Greenspan, 1991; Zhai & Milgram, 1992], such that the respective capabilities of each are efficiently utilised. The reasoning behind such a division of responsibilities is that in general humans are more suited for higher level perception and reasoning, task conceptualisation, understanding the environment and dealing with unusual circumstances, while machines are good at low level sensory control functions, precision, reliability and computationally intensive tasks.

Our approach to telerobotic control in unstructured environments therefore seeks to allocate tasks such that the HO can perform optimally and in harmony with machine capabilities. It aims to help the human to generate and update remote environmental models on-line, monitor interactions of the tele-robot with its environment and control the whole system in an ecologically sound manner, consistent with the Director/Agent metaphor. The intended result is a flexible and robust teleoperation system from the viewpoint of environmental uncertainties, which provides shared control between the operator and the automated support system.

In the remainder of this paper we present the concept of "virtual telerobotic control" (VTC) of remote manipulation systems for unstructured environments [Rastogi et al, 1993]. In terms of the taxonomy discussed above, the region of intended application of VTC is indicated in Fig. 4. Whereas the domain of the horizontal axis (extent of world structure) is meant to be essentially the same as that of manual teleoperation / telepresence, we see that both the extent of world knowledge (EWK) and autonomy axes have schematically been extended somewhat further in the direction of the top right hand corner. With respect to the EWK axis, this is because the VTC approach relies on a powerful set of tools for building up a partial world model of the remote worksite (see Fig. 3). With respect to the autonomy axis, this is because VTC brings the HO one step closer to being a supervisory controller: the resulting partial world model can first be used for off-line programming of the robotic system, to establish a set of open loop commands which can subsequently be communicated to the manipulator or vehicle. In this sense the HO may now act more as a monitor during execution of the commands, rather than strictly as an in-the-loop controller.

2. THE ARGOS TOOLKIT

The basic tool for our VTC system is the ARGOS™ (Augmented Reality through Graphical Overlays on Stereovideo) toolkit. As described for example in [Milgram et al, 1991; 1993; 1994a,b; 1995], ARGOS is a “Mixed Reality” display interface employing calibrated stereoscopic 3D graphical overlays on a remote stereoscopic video view of the real (unmodelled) world. A simple block diagram of the ARGOS system components is given in Fig. 5.

The major components of ARGOS technology are Stereoscopic Video and Calibrated Stereographics. These are discussed below, with emphasis on their applicability for teleoperation.

Stereoscopic Video

Most teleoperation systems use video cameras to provide feedback about the remote worksite. Although monoscopic video provides many important depth cues, such as linear perspective, interposition, relative size and shading, for tasks requiring finer spatial judgement these are often insufficient, and the absence of binocular depth cues can impede the operator's task performance. Although a HO will normally acquire much knowledge about depth relationships at the remote site through practice [Kim et al, 1987], this is clearly the case principally for repetitive tasks [Drascic, 1991], which do not usually characterise teleoperations in unstructured environments. To circumvent this problem, stereo video has successfully been demonstrated to improve operator performance in many cases [e.g. Drascic & Grodski, 1993; Pepper et al, 1981]. Prior use of stereo video has also been found to reduce operator training times in using monoscopic cues [Drascic, 1991], and stereo viewing is much better for detecting slopes and gradients in the environment and for reducing the effects of visual noise [Merritt, 1988].


Fig. 5. Block diagram of ARGOS™ system for Augmented Reality through Graphic Overlays on Stereovideo. Computer generated graphic objects are superimposed onto stereoscopic video images, and viewed on a monitor-based display system.

Calibrated Stereographic Overlays

There are three primary purposes for overlaying graphics in teleoperation applications: 1) as a tool for "probing" the real remote environment visible on video; 2) for enhancing video images through overlays registered to real objects, thus compensating for image degradation due to occlusion of objects, poor video quality and bad lighting conditions; and 3) for introducing realistic looking but non-existent graphic objects so that they appear to be a part of the video scene. All three of these objectives can be accomplished only when objects in the scene are previously defined, or when relevant data in the scene can be acquired on-line with sufficient precision through interactive mediation by the HO. In all cases, the real-world image is enhanced by the virtual graphic overlays, creating an augmented reality.

ARGOS Toolkit

The tools provided by ARGOS can be classified into "probing tools" and "enhancement tools". All of these tools have been designed both for improving the HO's comprehension of the remote environment and for interactive modelling of the remote world (see below).

• The most basic probing tool is the virtual pointer. This is a stereographic cursor which can be positioned anywhere in the stereovideo scene [Drascic & Milgram, 1991]. When properly calibrated, the virtual pointer gives a direct readout of its corresponding {x,y,z} location in absolute real world units, and thus quantifies the 3D location of any object adjacent to which it is placed.

• The virtual tape measure is an extension of the virtual pointer, used for measuring distances between points in the remote stereovideo scene. It is generated by clicking a start point with the virtual pointer and dragging a virtual line of calibrated length through the video image to a selected end point. (The geometry underlying both tools is sketched in code following this list.)

• Related to both of the above are virtual landmarks, which are graphical objects of known length, or known separation, superimposed on the video scene to enhance the HO's ability to judge absolute distances, and thus the absolute scale of the remote world. Research has shown that such simple graphic aids can be very effective for this purpose [Milgram & Krüger, 1992].

• Virtual planes are generated by specifying three or more coplanar points with the virtual pointer. One important application of such planes is for restricting movement of simulated objects. For example, a graphical model of a (virtual) robot interactively placed on a real surface would not ordinarily have any knowledge about the planar constraints of that surface. A straightforward way of conveying such information to the computer would be through interactive modelling (as described below) by means of virtual planes.

• Virtual objects, which are either interactively generated or premodelled according to particular geometric specifications, can be superimposed on stereo video at designated locations and at specified orientations to appear as if they are really present within the remote scene.

• Virtual encapsulators are wireframe shapes created on the remote stereovideo scene to encapsulate real objects. This can be done approximately, as a tool for indicating an envelope of size, position and orientation of a real object in space, or more exactly, for highlighting the edges of an object. Virtual encapsulators require the same modelling, location and orientation data as do virtual objects.

• Virtual trajectories are graphical indications of prescribed robot motions, added to the image of the real robot at a particular initial configuration, to specify the desired trajectory for the robot to follow. These can be used, for example, for path planning purposes, by placing trajectories into the video space and verifying plans for their accuracy in relation to the actual (unmodelled) worksite.

• A subclass of the virtual trajectory is the virtual tether, which was developed as a perceptual aid for manual telemanipulation tasks. A virtual line, or tether, is drawn between the end effector and its intended target. As the manipulator moves, it remains "tied" to the target through the tether. Research has shown that use of such a tether was able to improve accuracy in an experimental peg-in-hole task, relative to the case of stereo video alone with no tether [Ruffo & Milgram, 1992].

• Finally, a full stereographic 3D model of a remotely controlled robot, or a virtual manipulator, has been developed. By superimposing such a model of the robot at the remote site onto the real robot, the graphical model can be manipulated within the real (complex, unstructured, unmodelled) 3D work space. This concept is discussed in more detail below.
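To make the geometry behind the virtual pointer and tape measure concrete, the following sketch recovers a probed point's {x,y,z} location and the separation between two such points. It is our own illustration, not the ARGOS implementation, and assumes an idealised parallel-axis stereo rig with known baseline and focal length, with image coordinates measured from each camera's principal point:

```python
import numpy as np

# Assumed rig parameters, for illustration only: identical pinhole cameras of
# focal length F_PIXELS, separated by BASELINE_M along the x axis.
F_PIXELS = 800.0     # focal length in pixel units (hypothetical)
BASELINE_M = 0.12    # inter-camera baseline in metres (hypothetical)

def triangulate(x_left, y_left, x_right):
    """Recover the {x, y, z} position (metres, left-camera frame) of the
    virtual pointer from its projections in the two video images; pixel
    coordinates are measured from each camera's principal point."""
    disparity = x_left - x_right              # > 0 for points in front
    if disparity <= 0:
        raise ValueError("point at or beyond infinity")
    z = F_PIXELS * BASELINE_M / disparity     # classical stereo depth formula
    return np.array([x_left * z / F_PIXELS,   # x, by similar triangles
                     y_left * z / F_PIXELS,   # y
                     z])

def tape_measure(p_start, p_end):
    """Virtual tape measure: Euclidean distance between two probed points."""
    return float(np.linalg.norm(p_end - p_start))

# Probe two object features with the stereo cursor, then read off their
# separation in absolute real-world units.
a = triangulate(112.0, 10.0, 80.0)    # disparity 32 px -> z = 3.0 m
b = triangulate(-50.0, 22.0, -90.0)   # disparity 40 px -> z = 2.4 m
print(f"separation: {tape_measure(a, b):.3f} m")
```

A real system must of course also account for camera convergence, lens distortion and principal-point offsets, which is one reason ARGOS calibrates against an object of known geometry, as discussed in Section 3.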

3. INTERACTIVE PARTIAL MODELLING

Referring to Fig. 4, we recall that the key requirement for allowing the HO to operate as more than just a continuous manual controller is the ability to acquire adequate quantitative information about the remote world to permit semi-automated machine execution to take place. In contrast to the significant efforts by others to develop intelligent machine vision systems for this purpose, our approach to this challenge has been to provide the HO with the means for interactively building up a sufficient partial model of the worksite. In other words, the intelligence in our approach remains with the human component of the system, while we endeavour to provide her with the tools for carrying out required data acquisition operations. The concept of interactive modelling is therefore proposed as a means to enable the HO to convey limited amounts of data about the video scene to the computer, and thereby to develop and refine a quantitative, but not necessarily complete, model of portions of the remote world. On the basis of this partial model, the spatial information necessary for defining targets, waypoints and trajectories, as well as boundaries and obstacles, can be communicated for subsequent semi-automated control of the telerobot.
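As an illustration only, the resulting partial model might be represented along the following lines; the paper prescribes no particular data structure, and every name here is hypothetical:

```python
from dataclasses import dataclass, field

# One plausible shape for an interactively acquired partial world model.
# Coordinates are in the calibrated remote-world frame; only what the HO
# has actually probed is represented -- the model is deliberately partial.

@dataclass
class Waypoint:
    xyz: tuple           # probed 3D point (metres)
    label: str = ""      # e.g. "via", "grasp", "target"

@dataclass
class Obstacle:
    centre: tuple        # envelope of a real object (virtual encapsulator)
    extents: tuple       # half-sizes of its bounding box

@dataclass
class PartialModel:
    waypoints: list = field(default_factory=list)
    obstacles: list = field(default_factory=list)
    planes: list = field(default_factory=list)    # (point, normal) pairs

    def add_probe(self, xyz, label=""):
        """Record a point probed with the virtual pointer."""
        self.waypoints.append(Waypoint(xyz, label))

    def as_commands(self):
        """Flatten the model into an open-loop move sequence for the agent."""
        return [("MOVE_TO", wp.xyz) for wp in self.waypoints]
```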

The means for interactive modelling are provided by the ARGOS toolkit. Taking the virtual stereographic pointer as the basic measuring instrument, for example, a coordinate point in the stereovideo scene can be defined by moving the overlaid stereographic (virtual) pointer in stereovideo space to match the perceived 3D location of a selected object feature. In order for the ARGOS toolkit of pointers, tape measures, overlaid objects, encapsulators, etc. to be useful for this purpose, however, it is critical that a unique bidirectional one-to-one mapping of coordinate spaces between the virtual world and the remote world viewed through stereovideo be established, as illustrated in Fig. 6. In the ARGOS system, registration of the graphics with the real world is accomplished by means of a calibration object of known dimensions situated within the video image. For teleoperation applications the robot itself can serve as the calibration object, since it is in any case present at the remote scene and its geometric parameters are assumed to be well known. Whenever the stereo camera parameters change relative to the remote world coordinate system, however, the calibration must be redone (preferably on-line) to maintain accuracy.


Fig. 6. One-to-one mapping between graphic and real coordinate spaces
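By way of illustration, the standard Direct Linear Transform (DLT) is one textbook route to such a calibration: given six or more correspondences between known 3D points (for instance, on the robot itself) and their 2D image locations, a projection matrix can be recovered for each camera of the stereo pair. The following is a generic sketch, not the ARGOS calibration procedure:

```python
import numpy as np

def calibrate_camera(world_pts, image_pts):
    """Direct Linear Transform: recover a 3x4 projection matrix (up to scale)
    from >= 6 correspondences between known 3D points and their 2D images.
    world_pts: iterable of (X, Y, Z); image_pts: iterable of (u, v)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear equations in the 12
        # entries of the projection matrix P.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)       # least-squares null vector of A
    return vt[-1].reshape(3, 4)       # P, defined up to an arbitrary scale
```

Fitting one such matrix per camera yields the pair of projections that define the two-way mapping of Fig. 6; whenever the camera configuration changes, fresh correspondences must be gathered and the fit repeated.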

Following stereo calibration, the spatial location of any 3D point in the remote space can be found relative to the coordinate system of the remote world simply by pointing at it with the graphic cursor. Since this is done in stereo, these points can be specified by the operator directly in the real video scene. Initial studies have shown that subjects were able to align the position of a virtual graphic pointer with real video targets as accurately as they could align a real pointer manipulated in the same video scene [Drascic & Milgram, 1991]. By specifying multiple points in the real world 3D space, virtual objects such as cubes, planes, etc. can be created easily, with their corresponding positions and sizes in the real world known immediately. These overlaid images can then be interactively moved about to a desired location within the video image, to give the impression that they are actually at the remote site.
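For example, defining a virtual plane from three probed points, and constraining a simulated object to that plane, reduces to a few lines of vector algebra. This is a minimal sketch under our own naming:

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Define a virtual plane from three probed, non-collinear points
    (numpy arrays, remote-world coordinates): returns (point, unit normal)."""
    n = np.cross(p2 - p1, p3 - p1)
    return p1, n / np.linalg.norm(n)

def constrain_to_plane(obj_pos, plane_pt, normal):
    """Project a simulated object's position onto the plane, e.g. to rest a
    virtual robot model on a real, interactively modelled surface."""
    return obj_pos - np.dot(obj_pos - plane_pt, normal) * normal

# Three points probed on a real tabletop with the virtual pointer:
pt, n = plane_from_points(np.array([0.0, 0.0, 1.0]),
                          np.array([1.0, 0.0, 1.0]),
                          np.array([0.0, 1.0, 1.0]))
print(constrain_to_plane(np.array([0.3, 0.4, 1.2]), pt, n))  # z snaps to 1.0
```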

4. VIRTUAL TELEROBOTIC CONTROL (VTC) USING ARGOS

A schematic representation of the concept of Director / Agent virtual telerobotic control (VTC) is given in Fig. 7, where a generic barrier, representing some separation of distance, time, scale, or function, is shown between the remote system and the HO with her local control computer. The remote manipulator situated in the real unstructured, unmodelled world is sensed by video and reproduced locally for the HO. Using the ARGOS augmented reality toolkit, the HO is able interactively to build up a partial model of the remote world, on the basis of which commands intended for the real remote system can be formulated, rehearsed and ultimately transmitted.
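The command cycle implied by Fig. 7 might be caricatured as follows. Every class and method below is hypothetical scaffolding for exposition; the paper defines the concept, not an API:

```python
# A self-contained caricature of director/agent virtual telerobotic control.

class ArgosDisplay:
    """Stand-in for the augmented reality interface on the director's side."""
    def probe(self, label):
        # In reality the HO positions the virtual pointer in stereovideo;
        # here we simply return a canned {x, y, z} in remote-world metres.
        print(f"probed {label}")
        return (0.5, 0.2, 1.4)

    def preview(self, plan):
        print(f"rehearsing {len(plan)} moves with the virtual manipulator")

class RobotLink:
    """Stand-in for the channel across the distance/time/scale barrier."""
    def transmit(self, plan):
        print(f"transmitting {len(plan)} open-loop commands for execution")

display, link = ArgosDisplay(), RobotLink()
# 1. Director probes the live scene to build up the partial model.
plan = [("MOVE_TO", display.probe(lbl)) for lbl in ("via", "grasp", "place")]
# 2. Rehearse off-line against the real video backdrop.
display.preview(plan)
# 3. When satisfied, transmit; the HO now monitors rather than controls.
link.transmit(plan)
```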

It is important to point out that off-line graphical simulation of robot operations is a well-known technique in path planning. In particular, this allows specification and verification of the planned robot path without the expense of actual trials or the hazard of potential collisions in the real environment, prior to actual execution by the remote teleoperator. The essential difference between our VTC approach and other analogous approaches is that conventionally such off-line simulations are carried out for highly modelled environments, which of course has the distinct advantage of permitting visualisation of planned operations from any desired physical viewpoint. Our system, on the other hand, allows replication of the moves of the robot in the same visual environment as the real robot, superimposed on the real remote video scene, but only from the single viewpoint of the remote (stereo) camera system. The clear advantage of this superimposed modelling approach is twofold: the effort of acquiring a detailed world model, where that is feasible at all (Figs. 2 and 3), is not necessary, and the operator's ability to visualise the simulated tasks is greatly enhanced by the fact that the visual background coincides with the actual remote worksite.

It is useful to distinguish between our system and other related efforts. One of these is the JPL advanced graphics interface for telerobotic servicing and inspection [Kim et al, 1993], which provides analogous preview, prediction and on-line visualisation capabilities for both graphics and video, though not with stereoscopic displays. Another is the German ROTEX project [Brunner et al, 1993], which is quite sophisticated in its simulation and world modelling capabilities, but presents the graphics and video on adjacent displays, rather than integrated within a single display. Cannon & Leifer [1991] have also proposed a system for point-and-direct telerobotics; however, their system is based on a two-camera viewing system, rather than an integrated stereo display. Finally, Browse & Little [1991] have developed a similar virtual control prototype, but using only monoscopic displays.

5. CONCLUSION

Virtual telerobotic control (VTC) has been introduced as a means of providing an operating environment suitable for achieving optimal allocation of tasks between the human and machine elements of a telerobotic system. The ARGOS augmented reality toolkit, based on computer generated stereographic images superimposed onto a stereoscopic video image of the remote real worksite, is used to address, in a cost-effective manner, some of the problems associated with operating in unstructured environments, by allowing the human operator (HO) interactively to develop a partial model of the remote worksite. The result is that the HO is able to operate at an effective level of control which is substantially higher than conventional low level manual control.


Fig. 7. Summary of virtual telerobotic control system components.

6. ACKNOWLEDGEMENTS

This project has been supported by the Defence and Civil Institute of Environmental Medicine (DCIEM), the Manufacturing Research Corporation of Ontario (MRCO), and the Institute for Robotics and Intelligent Systems (IRIS).

7. REFERENCES

Browse, R.A. and Little, S., "The effectiveness of real time graphic simulation in telerobotics", IEEE International Conference on Robotics and Automation, 1991.

Brunner, B., Hirzinger, G., Landzettel, K., and Heindl, J., "Multisensory shared autonomy and tele-sensor-programming – Key issues in the space robot technology experiment ROTEX", Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, Yokohama, 1993.

Burtnyk, N. and Greenspan, M.A., "Supervised autonomy – partitioning telerobotic responsibilities between human and machine", International Conf. on Intelligent Teleoperation, Nov. 1991.

Cannon, D. and Leifer, L.J., "Point and direct telerobotics", International Conference on Intelligent Teleoperation, Nov. 1991.

Drascic, D., “Skill acquisition and task performance in teleoperation using monoscopic and stereoscopic video remote viewing”, Proc. Human Factors Society 35th Annual Meeting, 1991.

Drascic, D. and Grodski, J.J., "Using stereoscopic video for defence teleoperation", Proc. SPIE Vol. 1915, Stereoscopic Displays and Applications IV, 1993.

Drascic, D. and Milgram, P., "Positioning accuracy of a virtual stereographic pointer in a real stereoscopic video world", Proc. SPIE Vol. 1457, Stereoscopic Displays and Applications II, 1991.

Kim, W.S., Schenker, P.S., Bejczy, A.K., and Hayati, S., "Advanced graphics interfaces for telerobotic servicing and inspection", Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, Yokohama, 1993.

Kim, W.S., Tendick, F., and Stark, L.W., "Visual enhancements in pick and place tasks: Human operators controlling a simulated cylindrical manipulator", IEEE J. Robotics and Automation, Vol. RA-3, No. 5, Oct. 1987.

Merritt, J.O., "Often-overlooked advantages of 3-D displays", Proc. SPIE Vol. 902, Three Dimensional Imaging and Remote Sensing, 1988.

Milgram, P., Drascic, D., Grodski, J.J., Rastogi, A., Zhai, S. and Zhou, C., “Merging real and virtual worlds”, Proc. Imagina 95, Monte Carlo, Feb. 1995.

Milgram, P. and Kishino, F., “A taxonomy of mixed reality visual displays”, IEICE Trans. on Information Systems, special issue on Networked Reality, Dec. 1994.

Milgram, P. and Krüger, M., "Adaptation effects in stereo due to on-line changes in camera configuration", Proc. SPIE Vol. 1669, Stereoscopic Displays and Applications III, 1992.

Milgram, P., Takemura, H., Utsumi, A., and Kishino, F., "Augmented Reality: A class of displays on the reality-virtuality continuum", Proc. SPIE Vol. 2351, Telemanipulator and Telepresence Technologies, Boston, Oct. 1994.

Milgram, P., Zhai, S., Drascic, D. and Grodski, J., "Applications of augmented reality for human-robot communication", Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots & Systems (IROS), Yokohama, July 1993.

Milgram, P., Drascic, D. and Grodski, J.J., “Enhancement of 3-D video displays by means of superimposed stereo-graphics”, Proc. Human Factors Society 35th Annual Meeting, San Francisco, Sept. 1991.

Pepper, R.L., Smith, D.C., and Cole, R.E., "Stereo improves operator performance under degraded viewing conditions", Optical Engineering, 20(4), July/Aug. 1981.

Rastogi, A., Milgram, P., Drascic, D., and Grodski, J.J., “Virtual telerobotic control”, Proc. DND Workshop on Advanced Technologies in Knowledge Based Systems and Robotics, Ottawa, Nov. 1993.

Ruffo, K. and Milgram, P., "Effect of stereographic + stereovideo "tether" enhancement for a peg-in-hole task", Proc. IEEE Annual Conf. on Systems, Man & Cybernetics, 1992.

Schebor, F.S. and Turney, J.L., "Realistic and consistent telerobotic simulation", IEEE International Conference on Robotics and Automation, 1991, pp. 889-894.

Sheridan, T.B., Telerobotics, Automation and Human Supervisory Control, MIT Press, 1992.

Spofford, J.R., Garcia, K.D., and Gatrell, L.B., "Machine vision augmented displays for teleoperation", IEEE International Conference on Systems, Man and Cybernetics, 1991, pp. 19-23.

Tachi, S., Arai, H., and Maeda, T., "Tele-existence master slave system for remote manipulation (II)", Proceedings 29th Conference on Decision and Control, 1990.

Zhai, S. and Milgram, P., "Human-robot synergism and virtual telerobotic control", Proc. Annual Meeting of Human Factors Association of Canada, Oct. 1992.

Zhai, S. and Milgram, P., "A telerobotic virtual control system", Proc. SPIE 1612, Cooperative Intelligent Robotics in Space II, Boston, 1991.