
Augmented Indoor Modeling for Navigation Support for the Blind
Andreas Hub, Joachim Diepstraten, Thomas Ertl
Visualization and Interactive Systems Group
University of Stuttgart, Universitätsstraße 38
70569 Stuttgart, Germany


Abstract - In this paper we present a concept for wide-ranging indoor navigation support for blind and visually impaired people. Parts of this work were realized within a new prototype of an indoor navigation and object identification system for the blind. With the previous orientation assistant, blind persons can orient themselves and detect objects within modeled indoor environments. By pressing keys, the user's inquiries concerning the environment are answered acoustically through a text-to-speech engine. The previous system's limitation was that an object had to be hit precisely with a picking ray within a 3D model in order to allow proper identification. Our new prototype additionally delivers augmented navigation hints automatically as soon as the user walks into the real areas corresponding to virtual navigation areas.

Keywords: Indoor navigation, blind users, impaired vision, mobile computing

1 Introduction

Blind people want to know where they are and where they can go. They want to be informed about persons and objects in their environment, and any object or object feature may be of importance. Along the path to a destination, blind people want exact information about appropriate routes, dangers, distances and critical situations.

In 2004 a new type of indoor orientation assistant for the blind was presented by the authors [4]. This orientation assistant allows blind users to identify objects and to determine object features in close to real time. It consists of two parts: a sensor module and a small portable computer that can be carried in a small backpack. The portable computer has access to 3D environment models which are saved locally or can be obtained from the Nexus platform [3], a platform consisting of several servers. The sensor module itself comprises direction sensors and a stereo camera.

The orientation assistant works as follows. The location of the user, or more precisely of the orientation assistant, within buildings is determined over a conventional WiFi system via the differences in the signal strengths received from the access points. Using the sensor module, the blind user can point at objects; the direction of the sensor module determines the direction of a virtual picking ray within the corresponding 3D model. Names of typical indoor equipment such as furniture or building parts are saved within the 3D models. The sensor module also includes a keyboard. When a key is pressed, the name of the first object hit by the virtual ray is announced through a text-to-speech engine. The stereo camera can be used to measure the color of objects and to estimate their distances and sizes; it can also be used to detect differences between model information and reality. Optionally, the sensor module can be attached to a cane. This allows the module to be used with one hand while retaining the feeling of safety that the cane provides.

During the last year the hardware ergonomics of the module have been optimized and the software has been extended. In this paper we present the second generation of our electronic assistant, which now includes advanced indoor navigation support for the blind and visually impaired.
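The paper does not give implementation details for the picking-ray identification; the following Python sketch shows one way the idea could be realized, assuming axis-aligned bounding boxes for the modeled objects. All names, including the speak() stand-in for the text-to-speech engine, are hypothetical.

import numpy as np

def speak(text):
    # Stand-in for the system's text-to-speech engine.
    print(text)

class ModelObject:
    def __init__(self, name, box_min, box_max):
        self.name = name                            # object name stored in the 3D model
        self.box_min = np.asarray(box_min, float)   # axis-aligned bounding box corners
        self.box_max = np.asarray(box_max, float)

    def hit_distance(self, origin, direction):
        # Slab test: distance along the ray to the box, or None if the ray misses.
        with np.errstate(divide="ignore"):
            inv = 1.0 / np.asarray(direction, float)
        t1 = (self.box_min - origin) * inv
        t2 = (self.box_max - origin) * inv
        t_near = np.minimum(t1, t2).max()
        t_far = np.maximum(t1, t2).min()
        if t_near <= t_far and t_far >= 0.0:
            return max(t_near, 0.0)
        return None

def identify_object(objects, position, pointing_direction):
    # Announce the name of the first modeled object hit by the virtual picking ray.
    hits = [(obj.hit_distance(position, pointing_direction), obj) for obj in objects]
    hits = [(t, obj) for t, obj in hits if t is not None]
    if hits:
        speak(min(hits, key=lambda h: h[0])[1].name)
    else:
        speak("No modeled object in this direction.")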
2 Related work

Over the past few decades, some research has been dedicated to navigation assistance for blind or visually impaired persons. Many of these systems can be categorized as basic obstacle avoidance systems, for example the NavBelt of Shoval et al. [10], which produces a 120-degree wide view ahead of the user's current location. This information is translated into stereophonic sound that allows the user to notice whether a certain direction is blocked. Similar to this approach, the Haptica Corporation developed Guido [2], a robotic walking frame equipped with a sonar sensor. Recent and very promising approaches rely on robotic assistants, such as the work of Kulyukin et al. [5]. Although these work very well in new, unknown indoor environments, they still rely on provided infrastructure such as RFID tags, and they are anything but inconspicuous.

For outdoor navigation, several systems have been proposed [6, 7, 9, 12] or are commercially available [14]. Although they share common characteristics with indoor navigation systems, we will not go into further detail regarding them, as they often rely on technologies that are not available indoors, for example GPS tracking. The NOPPA system [8] briefly mentions the possibility of use as an indoor navigation system, but unfortunately these capabilities are not further detailed.

Other research also deals with indoor navigation for the blind, which is more challenging with respect to orienting a user because a generic solution to this problem does not yet exist. Sonnenblick [11] evaluated and implemented a large-scale indoor navigation system for blind individuals at the Jerusalem Center for Multi-Handicapped Blind Children. It relies on specially installed infrared beacons for orienting blind users and on a custom-built end-user device. In contrast, the "Chatty Environment" by Coroama [1] focuses on technology that does not require special hardware, but rather equipment that is already generally available. This system uses standard 802.11 WiFi for positioning and PDAs as end-user devices, which increases the chances of widespread use. In later work, this system was extended with routing algorithms tailored to the needs of the blind [13]. However, the system does not really address the common problems of WiFi positioning and its rather coarse precision.

3.1 Hardware improvements

A new plastic housing of the sensor module was developed which allows easy and ergonomic handling of the module with or without the cane (see Figure 1). It was possible to reduce the size and weight of the module significantly. The sensor module now includes acceleration sensors, inertial sensors and magnetic sensors in addition to the previous stereo camera and keyboard. The information gained from these sensors can be used to improve the location measurement that is currently determined using the conventional WiFi system of our institute.

Figure 1: Our blind colleague during the first hardware usability test of the new navigation assistant. The sensor module can be clicked onto the cane and handled with one hand (a). The system allows the identification of objects by pointing at them with the sensor module (b), and can inform the blind user about the present location and give navigation advice when he walks into corresponding navigation areas.
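The positioning algorithm itself is not described in the paper; a common technique that matches the description of using differences in received signal strengths is WiFi fingerprinting with a pre-recorded radio map. The following is a minimal sketch under that assumption; the coordinates, access point names and dBm values are made up for illustration.

import math

# Hypothetical pre-recorded radio map: known position -> received signal
# strength (dBm) per access point, measured during a calibration walk.
RADIO_MAP = {
    (2.0, 5.0):   {"ap1": -42, "ap2": -60, "ap3": -75},
    (10.0, 5.0):  {"ap1": -55, "ap2": -48, "ap3": -70},
    (10.0, 12.0): {"ap1": -68, "ap2": -50, "ap3": -52},
}

def estimate_position(measurement):
    # Return the radio-map position whose fingerprint is closest to the
    # current measurement (Euclidean distance in signal space).
    def distance(fingerprint):
        common = set(fingerprint) & set(measurement)
        if not common:
            return float("inf")
        return math.sqrt(sum((fingerprint[ap] - measurement[ap]) ** 2 for ap in common))
    return min(RADIO_MAP, key=lambda pos: distance(RADIO_MAP[pos]))

# Example: the measurement best matches the second fingerprint, so the
# estimate is (10.0, 5.0).
print(estimate_position({"ap1": -54, "ap2": -47, "ap3": -71}))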

3.2 Automatic navigation support by virtual navigation areas

As described above, the basis of our approach is 3D environment models. These models include static objects such as furniture as well as building features such as doors and windows, including their handles. The blind user can identify these objects simply by pointing at them with the sensor module. The disadvantage of the previous system was that a door or a hallway intersection might be missed if a room or hallway was scanned unsystematically. Another problem is that the location determined by the WiFi system carries a signal-dependent uncertainty; possible reasons are changing weather conditions, the presence of several persons nearby, or other conditions influencing electromagnetic fields. This led us to introduce the concept of virtual navigation areas.

We added wide rectangles on the floor within the 3D model of our computer science building wherever navigation information is useful, for example in front of entries, office doors and hallway intersections. An example for the first floor is shown in Figure 2. When the user enters the real area corresponding to one of these virtual navigation areas, spoken information about the kind of navigation area and direction advice can be given through the text-to-speech engine. Additionally, augmented information can be transmitted, for example about the relation between a door number and the purpose of the room behind the door.

Figure 2: Isometric map of the first floor of the computer science building showing the most important navigation areas in the middle of intersection points (black) and in front of the main entrances, stairs, lifts, some of the doors and emergency exits (gray).
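A minimal sketch of how such virtual navigation areas could be evaluated at runtime is given below. This is not the authors' implementation; the area coordinates and the message in the usage example are hypothetical, and speak is again a stand-in for the text-to-speech engine.

class NavigationArea:
    # A wide rectangle on the floor of the 3D model carrying spoken advice.
    def __init__(self, x_min, y_min, x_max, y_max, message):
        self.bounds = (x_min, y_min, x_max, y_max)
        self.message = message

    def contains(self, x, y):
        x_min, y_min, x_max, y_max = self.bounds
        return x_min <= x <= x_max and y_min <= y <= y_max

def update_position(areas, x, y, active, speak):
    # Announce each area exactly once when the user walks into it; `active`
    # remembers the areas the user is currently inside, so the advice is not
    # repeated on every position update.
    for area in areas:
        if area.contains(x, y):
            if area not in active:
                speak(area.message)
                active.add(area)
        else:
            active.discard(area)

areas = [NavigationArea(3.0, 0.0, 5.0, 2.0,
                        "Hallway intersection. Lift to the left, stairs straight ahead.")]
active = set()
update_position(areas, 4.1, 1.2, active, print)  # entering: advice is spoken once
update_position(areas, 4.2, 1.3, active, print)  # still inside: stays silent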

Furthermore, location information on stairs, lifts, emergency exits and sanitary facilities can be given. In critical situations, such as in front of a stairway, appropriate support can be given, including information on the existence of banisters or landings and the number of steps (Figure 3).
"Stair with seven steps upstairs. Banisters to the left and right side. Turn left after the landing."
Figure 3: Virtual navigation areas near a stair. When walking into the corresponding area, the user receives information on the direction of the stair, the number of steps, and the existence of banisters and landings.
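The augmented stair information could be represented as a small record from which the spoken sentence of Figure 3 is generated. The following sketch illustrates this; the field names are our own assumption rather than the system's actual model schema.

from dataclasses import dataclass

@dataclass
class StairInfo:
    # Hypothetical augmented model information attached to a stair's navigation area.
    steps: int
    direction: str        # "upstairs" or "downstairs"
    banisters: str        # e.g. "to the left and right side"
    after_landing: str    # direction advice after the landing, or "" if none

    def message(self):
        # Compose the spoken advice from the stored attributes.
        text = f"Stair with {self.steps} steps {self.direction}. Banisters {self.banisters}."
        if self.after_landing:
            text += f" Turn {self.after_landing} after the landing."
        return text

# Reproduces the spoken advice shown in Figure 3.
print(StairInfo(7, "upstairs", "to the left and right side", "left").message())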

4 Conclusion and future work

We have developed the prototype and the basic concept for a wide-ranging indoor navigation assistant for blind and visually impaired people. The hardware of the sensor module was optimized according to the suggestions of blind test subjects, leading to ergonomic handling of the module. The complete system, including software, allows the identification of modeled objects in close to real time and the retrieval of object features saved in a model or in a connected database. The concept of navigation areas allows blind users of the system to obtain navigation advice and warnings ahead of dangerous situations.

The combination of local sensor information, environment models and the text-to-speech engine offers plenty of options to inform blind users and allows flexible reactions even in difficult situations. Even if one sensor produces misleading information, as can happen with the magnetic sensors near strong magnetic fields or with the stereo camera in a completely dark environment, the model information can still be used on its own. Conversely, if model information is missing due to incompleteness or changes, the local sensors alone, e.g. the compass, can still be used for navigation (illustrated by the sketch at the end of this section).

In general, it is possible to give blind persons any advice about real and virtually modeled objects using our system. The problem is that blind persons generally do not want just any information, but suitable information. Anyone who has accompanied a blind person knows that it is not a simple task to give only appropriate information and navigation advice. The challenge is therefore to find out what blind people really want to know in a specific situation, which is often not what sighted people think blind people should know. Another question is what information about their environments can technically be provided to blind people. Our experience so far is that only usability tests are suitable to find out what blind people really want to know; carrying out these tests will therefore be a strong focus in the near future.

The model of the computer science building can also be used for virtual journeys of blind users within the model. These navigation simulations can be used to optimize the software ergonomics and the efficient processing of the blind user's inquiries. In the next software versions we will also take into account that blind users want to add their own comments to certain locations and to create navigation areas of their own. In the more distant future we will extend our system to outdoor environments. This requires the integration of GPS sensors and the development of handover concepts between the different positioning systems. Special attention must be paid to weight and ergonomics, which are always the most important criteria for acceptance by blind users.

During the development phase it became clear that there is still a long way to go towards a small navigation device for the blind that can be used discreetly. The main problem is that such a device needs part of the perceptual power of human cognition while simultaneously having access to extended 3D models with augmented information on a large number of objects and their features.
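The source-selection logic described above is given only narratively in the paper; purely as an illustration, a heading estimate could degrade gracefully as follows (function and parameter names are hypothetical).

def choose_heading(compass_heading, model_heading, magnetic_disturbance):
    # Prefer the local compass, but fall back to a heading derived from the
    # environment model (e.g. the known hallway direction) when the magnetic
    # sensors are known to be unreliable.
    if magnetic_disturbance and model_heading is not None:
        return model_heading      # sensor misleading: model information only
    if compass_heading is not None:
        return compass_heading    # normal case: local sensor
    return model_heading          # no sensor reading: model only, possibly None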
However, we think that with our new prototype and the concept of navigation areas, we are now one step closer to our goal of safe and independent orientation and navigation for the blind, even in previously unknown environments.

Acknowledgments

This project is funded by the Deutsche Forschungsgemeinschaft within the Center of Excellence 627 "Spatial World Models for Mobile Context-Aware Applications". We would also like to thank our colleague Alfred Werner for his helpful input.

References

[1] Coroama, V. "The Chatty Environment - A World Explorer for the Visually Impaired". Adjunct Proceedings of Ubicomp, 2003.
[2] Haptica Corporation, Guido. http://www.haptica.com
[3] The Nexus platform. http://www.nexus.uni-stuttgart.de/index.en.html
[4] Hub, A., Diepstraten, J., Ertl, T. "Design and Development of an Indoor Navigation and Object Identification System for the Blind". Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility, Atlanta, GA, USA, 147-152, 2004.
[5] Kulyukin, V., Gharpure, C., DeGraw, N. "Human-Robot Interaction in a Robotic Guide for the Visually Impaired". AAAI Spring Symposium, 158-164, 2004.
[6] Loomis, J., Golledge, R. G., Klatzky, R. L. "Navigation System for the Blind: Auditory Display Modes and Guidance". Presence, Vol. 7, No. 2, 192-203, 1998.
[7] Luo, A., Zhang, X.-F., Tao, W., Burkhardt, H. "Recognition of Artificial 3-D Landmarks from Depth and Color - A First Prototype of Electronic Glasses for Blind People". Proceedings of the 11th Scandinavian Conference on Image Analysis (SCIA'99), 1999.
[8] NOPPA - Navigation and Guidance for the Blind. http://www.vtt.fi/tuo/53/projektit/noppa/noppaeng.htm
[9] Ross, D. A., Blasch, B. B. "Wearable Interfaces for Orientation and Wayfinding". Proceedings of ACM ASSETS'00, 193-200, 2000.
[10] Shoval, S., Borenstein, J., Koren, Y. "Mobile Robot Obstacle Avoidance in a Computerized Travel Aid for the Blind". IEEE International Conference on Robotics and Automation, 1994.
[11] Sonnenblick, Y. "An Indoor Navigation System for Blind Individuals". Proceedings of the 13th Annual Conference on Technology and Persons with Disabilities, 1998.
[12] Strothotte, T., Petrie, H., Johnson, V., Reichert, L. "MoBIC: User Needs and Preliminary Design for a Mobility Aid for Blind and Elderly Travellers". Proceedings of the 2nd TIDE Congress, 348-351, 1995.
[13] Tissot, N. "Indoor Navigation for Visually Impaired People - The Navigation Layer". ETH Zürich Technical Report, 2003. http://www.mics.ch/SumIntU03/NTissot.pdf
[14] VisuAide Inc., Trekker. http://www.visuaide.com/gpssol.html

