

B. Implemented modules of the navigation system

B.4 Experiments with the navigation system

Figure B.13 shows images of the moving robot. The robot has to reach a goal along a given path in the object coordinate system. This path is first translated into robot movement commands. Since the movements of the robot are not exact, the Navigator periodically commands the pilot to interrupt the current path, turn toward the landmark, take images, return to the path and continue the motion. The Navigator then sends a pose request to the TrackPose module (to process the newly taken images of the landmark) and uses its result to correct the driven path. Figure B.14 shows screen shots of the vision module: the result of the image feature extraction and the pose estimation before and after the tracking process. These results are used to update and correct the real position of the robot, as shown in figure B.15. On the left is a comparison of the planned path with the actually

Fig. B.14: The image processing steps during the navigation: The left column shows images with extracted Hough lines. The middle column shows the projected object model before matching and the right column shows the projected transformed object model after matching.

driven path, illustrating the growing odometric error. The path itself is a square, which is driven several times. The right image of figure B.15 shows the accumulated angle differences between the planned path and the actually driven path, and between the actually driven path and the visually corrected path. The accumulated odometric errors grow over time and eventually lead to intolerable results. The visual error correction, in contrast, leads to a nearly constant error function and thus to much better and more stable results. The level of the corrected error function depends on both the calibration quality and the match error.
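The correction cycle described above can be pictured as a simple control loop. The following is a hypothetical sketch, not the thesis implementation: the pilot and TrackPose interfaces are assumptions, the toy pilot merely drifts in heading at every step, and the toy tracker pretends vision recovers the true heading.

```python
# Hypothetical sketch of the correction cycle: after a few movement commands
# the Navigator interrupts the path, turns the robot toward the landmark,
# takes images and asks a TrackPose stand-in for a corrected pose.
# All interfaces here are illustrative assumptions.

class NoisyPilot:
    """Toy pilot whose heading drifts by a fixed amount per movement."""
    def __init__(self, drift=0.5):
        self.drift = drift

    def execute(self, command, pose):
        x, y, heading = pose
        dx, dy = command
        return (x + dx, y + dy, heading + self.drift)

    def turn_to_landmark(self, pose):
        pass  # interrupt the path and face the landmark

    def take_images(self):
        return []  # acquire images of the landmark

    def return_to_path(self, pose):
        pass  # resume the planned path


class TrackPose:
    """Stand-in for the pose estimation: the real module matches extracted
    Hough lines against the projected object model."""
    def estimate(self, images, odometry_pose):
        x, y, _ = odometry_pose
        return (x, y, 0.0)  # pretend vision removes the heading drift


def navigate(path, pilot, tracker, correction_interval=4):
    pose = (0.0, 0.0, 0.0)  # (x, y, heading)
    for i, command in enumerate(path):
        pose = pilot.execute(command, pose)        # odometry-based update
        if (i + 1) % correction_interval == 0:
            pilot.turn_to_landmark(pose)           # interrupt the current path
            images = pilot.take_images()
            pose = tracker.estimate(images, pose)  # visual pose request
            pilot.return_to_path(pose)
    return pose
```

With periodic correction the heading error stays bounded, whereas pure odometry accumulates drift at every step, mirroring the two error curves in figure B.15.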


Fig. B.15: Left: Planned path overlaid with the actually driven path. Right: Odometric (angle) error vs. visually corrected (angle) error.


Fig. B.16: Navigation path: The white circles show the localization positions during the navigation. The black circle indicates an obstacle that is avoided.

Figure B.16 shows the path during another navigation sequence. The white circles in figure B.16 indicate the positions of self-localization: the robot interrupts the path, turns toward the landmark, and then continues with its movements. The black circle in figure B.16 indicates an

Fig. B.17: Hallway navigation example: The left column shows images with extracted Hough lines. The middle column shows the projected object model before matching and the right column shows the projected transformed object model after matching.

obstacle. The collision server overrides the movement commands and steers the robot around the obstacle.
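The override by the collision server can be sketched as a simple command arbitration. The function below is an illustrative assumption, not the collision server's actual interface; the safety margin and the avoidance manoeuvre are invented for the example.

```python
# Illustrative arbitration between the Navigator's movement command and the
# collision server: when a range reading falls below a safety margin, an
# avoidance manoeuvre replaces the planned motion. Names, the threshold and
# the manoeuvre parameters are assumptions, not the thesis implementation.

def arbitrate(nav_command, obstacle_distance_m, safety_margin_m=0.4):
    """Return the command that is actually sent to the motors."""
    if obstacle_distance_m < safety_margin_m:
        # Override: turn away from the obstacle and advance cautiously.
        return ("avoid", {"turn_deg": 30.0, "advance_m": 0.2})
    return nav_command
```

As long as the way is free, the Navigator's command passes through unchanged; only a nearby obstacle triggers the override, matching the behaviour around the black circle in figure B.16.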

Self-localization of the robot is not restricted to compact landmarks. In figure B.17 a hallway is used as the object model; hence the algorithm can also be applied to CAD maps of office environments. These experiments show the performance of the navigation system, which was mainly implemented by D. Grest. More details about the system can be found in his diploma thesis [68].
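The projection step visible in figures B.14 and B.17, where the 3-D object model (a compact landmark or a hallway CAD map) is overlaid on the image before and after matching, can be sketched with a standard pinhole camera model. The intrinsic parameters and the pose below are assumed values for illustration, not the calibrated ones from the thesis.

```python
import numpy as np

def project_segments(segments, K, R, t):
    """Project 3-D model line segments (N x 2 x 3) into the image
    (N x 2 x 2) with a pinhole camera: x ~ K (R X + t)."""
    pts = np.asarray(segments, dtype=float).reshape(-1, 3)
    cam = (R @ pts.T).T + t          # world -> camera coordinates
    img = (K @ cam.T).T              # apply the intrinsic parameters
    return (img[:, :2] / img[:, 2:3]).reshape(-1, 2, 2)

# Illustrative intrinsics and pose (assumed, not the thesis calibration).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                        # camera aligned with the world
t = np.array([0.0, 0.0, 5.0])        # model 5 m in front of the camera

# One model edge of a hallway-like structure, 1 m long along the x-axis.
edge = [[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]]
```

The projected segments can then be compared with the extracted Hough lines; the matching itself (middle and right columns of the figures) is not reproduced here.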

[1] Ackermann M. Akkumulieren von Objektrepräsentationen im Wahrnehmungs-Handlungs-Zyklus. Diplomarbeit, Lehrstuhl für Kognitive Systeme der Christian-Albrechts-Universität zu Kiel, 2000.

[2] Anderson B.D.O. and Moore J.B. Optimal Filtering. Prentice Hall, Englewood Cliffs, N.J., 1979.

[3] Araujo H., Carceroni R.L. and Brown C.M. A Fully Projective Formulation for Lowe's Tracking Algorithm. Technical Report 641, University of Rochester, 1996.

[4] Arbter K. Affine-invariant Fourier descriptors. In From Pixels to Features, Simon J.C. (Editor), Elsevier Science Publishers, pp. 153-164, 1989.

[5] Arbter K. Affininvariante Fourierdeskriptoren ebener Kurven. Ph.D. Thesis, Technische Universität Hamburg-Harburg, 1990.

[6] Arbter K., Snyder W.E., Burkhardt H. and Hirzinger G. Application of affine-invariant Fourier descriptors to recognition of 3-D objects. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 12, No. 7, pp. 640-647, 1990.

[7] Arbter K. and Burkhardt H. Ein Fourier-Verfahren zur Bestimmung von Merkmalen und Schätzung der Lageparameter ebener Raumkurven. Informationstechnik, Vol. 33, No. 1, pp. 19-26, 1991.

[8] Arkin R.C. Motor schema-based mobile robot navigation. International Journal of Robotics Research, Vol. 8, No. 4, pp. 92-112, 1989.

[9] Arkin R.C. Behavior-Based Robotics. MIT Press, Cambridge, Massachusetts, 1998.

[10] Ayache N. and Faugeras O. Building, registrating and fusing noisy visual maps. International Journal of Robotics Research, Vol. 7, No. 6, pp. 45-65, 1988.

[11] Bar-Itzhack I.Y. and Oshman Y. Attitude determination from vector observations: quaternion estimation. IEEE Trans. Aerospace and Electronic Systems, AES-21, pp. 128-136, 1985.

[12] Barron J.L., Fleet D.J., Beauchemin S.S. and Burkitt T.A. Performance of optical flow techniques. International Journal of Computer Vision (IJCV), Vol. 12, No. 1, pp. 43-77, 1994.

[13] Bayro-Corrochano E. The geometry and algebra of kinematics. In [164], pp. 457-472, 2001.

[14] Bayro-Corrochano E., Daniilidis K. and Sommer G. Motor algebra for 3D kinematics: The case of the hand-eye calibration. Journal of Mathematical Imaging and Vision (JMIV), Vol. 13, pp. 79-100, 2000.

[15] Bayro-Corrochano E. and Kähler D. Kinematics of robot manipulators in the motor algebra. In [164], pp. 473-490, 2001.

[16] Bayro-Corrochano E. and Rosenhahn B. A geometric approach for the analysis and computation of the intrinsic camera parameters. Pattern Recognition, No. 35, pp. 169-186, 2002.

[17] Bayro-Corrochano E., Daniilidis K. and Sommer G. Hand-eye calibration in terms of motions of lines using geometric algebra. In 10th Scandinavian Conference on Image Analysis, Vol. 1, pp. 397-404, Lappeenranta, 1997.

[18] Beveridge J.R. Local Search Algorithms for Geometric Object Recognition: Optimal Correspondence and Pose. Ph.D. Thesis, Computer Science Department, University of Massachusetts, Amherst, 1993. Available as Technical Report CS 93-5.

[19] Besl P.J. The free-form surface matching problem. Machine Vision for Three-Dimensional Scenes, Freeman H. (Editor), pp. 25-71, Academic Press, San Diego, 1990.

[20] Beutelspacher A. Lineare Algebra. Eine Einführung in die Wissenschaft der Vektoren, Abbildungen und Matrizen. Vieweg Lehrbuch Mathematik, 1994.
