Gaze Contingent Depth Recovery and Motion Stabilisation


In order to demonstrate the practical value of the proposed concept, two experiments were conducted: one uses vergence for gaze contingent depth recovery of soft tissue; the other adaptively changes the position of the camera to cancel out cyclic tissue motion so that the foveal field of view is stabilized. For depth recovery, both real scenes captured by a live stereo camera and computer generated surfaces were used. Five subjects were asked to observe the two images by following a suggested fixation path. Their fixation points were acquired from the eye tracker while performing the experiment, and the corresponding depth coordinates were recorded.
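The depth recovery step itself is not detailed here, but as an illustration of the underlying geometry, the following is a minimal sketch of triangulating a fixation point from binocular gaze directions. The angle convention, the `depth_from_vergence` helper, and the 65 mm interpupillary distance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_from_vergence(theta_l, theta_r, ipd=0.065):
    """Triangulate fixation depth from the horizontal gaze angles
    (radians) of the left and right eye. Angles are measured from
    straight-ahead; a converging gaze gives theta_l > 0, theta_r < 0.
    ipd is the interpupillary distance in metres (assumed value)."""
    # Eye positions on the X-axis; fixation assumed in the X-Z plane.
    xl, xr = -ipd / 2.0, ipd / 2.0
    tl, tr = np.tan(theta_l), np.tan(theta_r)
    # Intersect the two gaze rays x = x_eye + z * tan(theta) in closed form.
    z = (xr - xl) / (tl - tr)   # depth of the fixation point
    x = xl + z * tl             # lateral position of the fixation point
    return x, z

# Example: symmetric 2-degree convergence on the midline.
theta = np.deg2rad(2.0)
print(depth_from_vergence(theta, -theta))   # x ~ 0, z ~ 0.93 m
```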

For motion stabilization, we demonstrate how gaze contingency can be used to stabilize the apparent position of a moving target by controlling the compensatory movement of the stereo camera accordingly. An OpenGL stereo scene is set up in perspective projection, and a target sphere is eccentrically oscillated along the Z-axis (depth) while its X and Y coordinates are kept fixed. The virtual target is oscillated by transforming the model matrix; any required movement of the camera is simulated by an appropriate transformation of the view matrix. Free sinusoidal oscillation was used in this experiment. To regulate the movement of the camera, a closed feedback loop, as shown in Fig. 2, was used. As the user fixates on the target, the fixation amplitude is determined and subtracted from the preset amplitude, which corresponds to the reference distance between the camera and the target that we attempt to keep constant. The positional shift of the virtual camera depends on the error signal fed into the camera controller: an error of zero leaves the position of the camera unaffected, while a negative or positive error shifts the camera closer to or further away from the target. The response of the camera controller is regulated by Proportional, Integral and Derivative (PID) gains [4]. For this experiment, the reference distance was set to +1; that is, the camera has to be kept at a constant offset of one depth unit from the target, which is set to sinusoidal oscillation with a frequency of 0.1 Hz.

Fig. 2. By implementing a closed feedback loop, the robotic controller shifts the virtual camera in an attempt to drive the fixation amplitude error signal to zero
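As a concrete illustration of this loop, the following is a minimal sketch of a PID-regulated depth controller tracking a 0.1 Hz sinusoidal target, written against the description above. The gains, time step, and measurement noise are placeholders, not the values used in the study.

```python
import numpy as np

# Target oscillates sinusoidally in depth at 0.1 Hz (as in the experiment);
# the camera should stay one depth unit away from it.
FREQ, REF_DIST, DT = 0.1, 1.0, 0.02

# Placeholder PID gains -- the paper does not report the values used.
KP, KI, KD = 2.0, 0.5, 0.1

cam_z, integral, prev_err = REF_DIST, 0.0, 0.0
for step in range(int(30.0 / DT)):            # simulate 30 seconds
    t = step * DT
    target_z = np.sin(2 * np.pi * FREQ * t)   # sinusoidal depth oscillation
    # The fixation amplitude stands in for the eye-tracker measurement here.
    fixation = target_z + np.random.normal(0.0, 0.01)
    # Preset (reference) distance minus the measured camera-target distance:
    # zero leaves the camera in place, a negative error shifts it closer,
    # a positive error shifts it further away, as in the loop description.
    err = REF_DIST - (cam_z - fixation)
    integral += err * DT
    derivative = (err - prev_err) / DT
    prev_err = err
    cam_z += (KP * err + KI * integral + KD * derivative) * DT
```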

3 Results

Fig. 3 illustrates the depths recovered by the five subjects studied. During the experiment, they were asked to scan with their eyes along the Y-axis of the object.

During the experiment, no visual markers were introduced, and the subjects were relatively free to select and fixate on image features of their preference. Fig. 3a shows a comparative plot of the surface depths recovered from these subjects. It is evident that a relatively close correlation has been achieved, demonstrating the feasibility of veridical reconstruction of the real depth.

The same subjects were also asked to perform a similar task with synthetic images.

This is necessary since both live video and synthetically generated images are often present in the master control console of a robotic system; it is therefore important to establish that similar depth reconstruction behaviour can be achieved. As in the previous experiment, the subjects were asked to follow a predefined path by fixating on image features of their preference. The only restriction imposed was that they had to fixate on certain “landmark” areas, which correspond to either valleys or hills of known depth.


Fig. 3. (a) Comparative results of the depths reconstructed from the fixation paths of five subjects, along with the surface used (b). The subjects followed a loosely suggested vertical path starting from the bottom of the surface and moving towards the top.

Fig. 4. (a) Graphical comparison of each subject’s recovered depths against the actual depth of the suggested path, along with the virtual surface depicted on the right (b).

Fig. 4 presents all the depths recovered from these subjects, with the known depth shown as a thick reference line.

For motion stabilization, the subjects were instructed to keep fixating on the moving target, which becomes stationary once stabilization is achieved. After a short period of adaptation, all the subjects were able to stabilize the target; Fig. 5 demonstrates the constant distance between the target and the camera that the observers were able to maintain. It is evident that the gaze contingent camera closely compensates for the oscillation of the target. To allow for a more quantitative analysis, Table 1 lists the regression ratios of the target and motion compensated camera positions after subtracting out the constant distance maintained. The mean and standard deviation of the regression ratio achieved for this study group are 0.103 and 0.0912, respectively.
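A quantity of this kind can be computed with ordinary least squares. The sketch below assumes the regression ratio is the slope of the residual camera-to-target distance regressed on the target's displacement, so that 0 indicates perfect compensation; this is our reading of the text, not the authors' stated definition, and the synthetic traces are illustrative.

```python
import numpy as np

def regression_ratio(target_z, camera_z):
    """Least-squares slope of the residual camera-target distance
    against the target displacement (assumed definition): 0 means
    perfect motion compensation, 1 means no compensation at all."""
    distance = camera_z - target_z
    residual = distance - distance.mean()   # subtract the constant offset
    slope = np.polyfit(target_z, residual, 1)[0]
    return abs(slope)

# Example with synthetic traces: a camera that follows 90% of the motion
# while holding a +1 depth-unit offset leaves a ~0.1 residual coupling.
t = np.linspace(0.0, 30.0, 1500)
target = np.sin(2 * np.pi * 0.1 * t)
camera = 1.0 + 0.9 * target
print(regression_ratio(target, camera))   # ~0.1, comparable to the mean
```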


Fig. 5. (a) Gaze contingent motion compensation over a period of time, performed by five subjects. The shift of the subject traces along the depth axis corresponds to the required reference distance of the gaze-controlled camera from the target. (b) Corresponding linear regression plot for one subject

4 Discussion and Conclusions

In conclusion, we have demonstrated two important features related to gaze contingent robotic control. Deploying robots around and within the human body, particularly for robotic surgery, presents a number of unique and challenging problems that arise from the complex and often unpredictable environments that characterise the human anatomy. Existing master-slave robots such as the da Vinci system, which embodies the movements of trained minimal access surgeons through motion scaling and compensation, are gaining clinical significance. Under the conventional dichotomy of autonomous and manipulator technologies in robotics, the intelligence of the robot is typically pre-acquired through high-level abstraction and environment modelling.


For systems that require robotic vision, this is known to create major difficulties. The ethical and legal barriers imposed on interventional surgical robots give rise to the need for a tightly integrated perceptual docking between the operator and the robot, where interaction in response to sensing is firmly under the command of the operating surgeon. The study presented here is a first step towards this goal and, to our knowledge, is the first of its kind to have been demonstrated on normal subjects for both real and synthetic scenes.

It is interesting to note that for two of the subjects studied (2 and 5), near perfect motion compensation was achieved. These particular subjects had the opportunity to spend more time performing the experiment over several sessions, suggesting that experience with the system plays a certain role in the ability to stabilize the motion of an oscillating object. It should be noted that a number of other issues need to be considered for the future integration of gaze contingency into robotic control, such as the dynamics of vergence [5][6] and subject/scene specific behaviour of the eye [7][8]. Other issues related to monocular preference [9], visual fatigue [10] and the spatial errors that arise when portraying 3D space on a 2D window [11] will also need to be considered.

It is worth noting that the motion compensation study assumed that the target oscillates along the visual axis of the camera. In a real situation, a fixated tissue could oscillate in any direction. Realigning the visual axis of the camera with the oscillation axis could be achieved by also taking into account the oscillation components along the X-axis and Y-axis acquired from the eye tracker. This issue also requires further investigation.

Acknowledgements

The authors would like to thank Paramate Horkaew and Mohamed ElHelw for their valuable help.

References

1. Mon-Williams, M., Tresilian, J.R., Roberts, A.: Vergence provides veridical depth perception from horizontal retinal image disparities. Exp Brain Res 133 (2000) 407-413
2. Yang, G.-Z., Dempere-Marco, L., Hu, X.-P., Rowe, A.: Visual search: psychophysical models and practical applications. Image and Vision Computing 20 (2002) 291-305
3. Bookstein, F.L.: Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. 11 (1989)
4. Dorf, R.C., Bishop, R.H.: Modern Control Systems, 9th edn. Prentice Hall (2001)
5. Howard, I.P., Allison, R.S., Zacher, J.E.: The dynamics of vertical vergence. Exp Brain Res 116(1) (1997) 153-159
6. Kawata, H., Ohtsuka, K.: Dynamic asymmetries in convergence eye movements under natural viewing conditions. Jpn J Ophthalmol 45(5) (2001) 437-444
7. Stork, S., Neggers, S.F.W., Müsseler, J.: Intentionally-evoked modulations of smooth pursuit eye movements. Human Movement Science 21 (2002) 335-348
8. Rottach, K.G., Zivotofsky, A.Z., Das, V.E., Averbuch-Heller, L., Discenna, A.O., Poonyathalang, A., Leigh, R.J.: Comparison of horizontal, vertical and diagonal smooth pursuit eye movements in normal human subjects. Vision Res 36(14) (1996) 2189-2195
9. van Leeuwen, A.F., Collewijn, H., Erkelens, C.J.: Dynamics of horizontal vergence movements: interaction with horizontal and vertical saccades. Vision Res 38(24) (1998) 3943-3954
10. Takeda, T., Hashimoto, K., Hiruma, N., Fukui, Y.: Characteristics of accommodation toward apparent depth. Vision Res 39(12) (1999) 2087-2097
11. Wann, J.P., Rushton, S., Mon-Williams, M.: Natural problems for stereoscopic depth perception in virtual environments. Vision Res 35(19) (1995) 2731-2736

Freehand Cocalibration of Optical and Electromagnetic Trackers for Navigated Bronchoscopy

Adrian J. Chung, Philip J. Edwards, Fani Deligianni, and Guang-Zhong Yang

Royal Society/Wolfson Foundation Medical Image Computing Laboratory, Imperial College, London, UK
{a.chung,eddie.edwards,g.yang}@imperial.ac.uk

Abstract. Recent technical advances in electromagnetic (EM) tracking have facilitated the use of EM sensors in surgical interventions. Due to the susceptibility of the EM field to distortions when placed in close proximity to metallic objects, they require calibration in situ to maintain an acceptable degree of accuracy. In this paper, a freehand method is presented for calibrating electromagnetic position sensors by mapping the coordinate measurements to those from optical trackers. Unlike previous techniques, the proposed method allows for free movement of the calibration object, permitting continuity and interdependence between positional and angular corrections. The proposed method involves calculation of a mapping from the EM coordinate measurements to the optical tracker coordinates with radial basis function interpolation based on a modified distance metric. The system provides efficient distortion correction of the EM field, and is applicable to clinical situations where a rapid calibration of EM tracking is required.
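The modified distance metric and the angular correction are not given in this abstract, so the following is only a minimal positional sketch of the radial basis function idea under a standard Euclidean metric with Gaussian kernels: paired EM and optical measurements supply the training data, and the fitted map corrects new EM readings. The kernel, its width, and the function names are illustrative assumptions.

```python
import numpy as np

def fit_rbf(em_pts, opt_pts, sigma=50.0):
    """Fit per-axis RBF weights mapping EM positions to optical ones.
    em_pts, opt_pts: (N, 3) arrays of corresponding measurements (mm)."""
    d = np.linalg.norm(em_pts[:, None, :] - em_pts[None, :, :], axis=2)
    K = np.exp(-(d / sigma) ** 2)            # Gaussian kernel matrix
    # A small ridge term keeps the linear system well conditioned.
    w = np.linalg.solve(K + 1e-6 * np.eye(len(em_pts)), opt_pts)
    return w

def apply_rbf(em_pts, query, w, sigma=50.0):
    """Map a new EM reading into the optical coordinate frame."""
    d = np.linalg.norm(query[None, :] - em_pts, axis=1)
    return np.exp(-(d / sigma) ** 2) @ w

# Example: recover a synthetic field warp from 200 paired measurements.
rng = np.random.default_rng(0)
em = rng.uniform(-100.0, 100.0, (200, 3))
opt = em + 0.5 * np.sin(em / 40.0)           # synthetic EM-field distortion
w = fit_rbf(em, opt)
print(apply_rbf(em, np.array([10.0, -20.0, 5.0]), w))
```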

1 Introduction
