This invention relates to a hands-free wearable electronic traveling aid (ETA) system for real-time, guided indoor navigation of blind and visually impaired (BVI) people. Specifically, the invention discloses a system comprising multi-sensory inputs and machine learning processes together with crowd-assisted interfaces for navigation routing and for resolving problematic situations during navigation of a BVI user.
A survey of blind experts has shown that after outdoor navigation, the second most important ETA feature for BVI persons is indoor navigation and orientation, for example, in public institutions, supermarkets, office buildings, homes, etc. [1]. BVI persons need an ETA for orientation and navigation in unfamiliar indoor environments with embedded features for the detection and recognition of obstacles (not only on the ground but also at eye level) and desired destinations such as rooms, staircases, elevators, doors, and exits. To date, BVI indoor navigation in unknown environments is still the most critical task for developers in this area due to the weak Global Positioning System (GPS) signal indoors [2] and costly pre-arranged indoor infrastructural installations (such as Wi-Fi routers, beacons, RFID tags, 5G signals, etc.). Thus, some other special techniques or technologies are needed [3].
Solutions tailored to function in very restrictive settings, tests lacking robustness, and the limited involvement of end-users were emphasized as major limitations of the existing ETA research initiatives [4, 5]. A tradeoff between the accuracy and costs of developing and deploying an indoor navigation solution was highlighted as a limiting factor after a thorough review of various technologies. Wi-Fi was pointed out as the most economically feasible alternative as long as such infrastructure is properly installed and the users can tolerate lower accuracy [6]. In a semi-structured survey of BVI experts [1, 7], a basic understanding of users' expectations and requirements for indoor ETA solutions was presented and enabled the identification of some new developments in the field.
A wide range of general-purpose social networks, Web 2.0 media apps, and other smart ICT (information and communication technology) tools have been developed to improve people's daily tasks, including navigation and orientation. Although they are not designed to meet the specialized requirements of BVI people, some features make them useful, for instance, text (and image) to voice, tactile feedback, and other enabling software and hardware solutions. However, the complexity and abundance of features pose a significant challenge for BVI persons. According to Raufi et al. [8], the sheer volume of information and data from social networks confuses BVI users. In this way, Web 2.0 social networks do not guarantee specialized digital content accessibility for BVI users [9]. More focused approaches are in demand.
BVI people frequently use apps specifically designed for them to accomplish daily activities. However, N. Griffin-Shirley et al. emphasize that persons with visual impairments would like to see both improvements in existing apps and new apps [10].
Several currently available navigation apps are primarily based on pre-developed navigational information but do not provide real-life support, experience-centric user approaches, or participatory Web 2.0 social networking [1, 11, 12]. Other real-life social apps enable access to a network of sighted users and company representatives who are ready to provide real-time visual assistance for the BVI tasks at hand [13]. However, these apps are not adapted to the specific BVI needs on indoor routes while navigating, orientating, getting lost, etc.
To date, there are no publications or patents that describe a hands-free BVI indoor navigation approach with crowd-sourced navigational routes, which provides tactile and audio information to the BVI user, and which uses facial EMG signals as a source for a user-controlled instructive interface. The present invention addresses these issues.
The present invention integrally and innovatively deals with the following main technical problems that are known in the field of BVI ETA navigation and orientation applications indoors:
The above-listed technical problems are addressed using an integrated ETA system approach, wherein the proposed innovative hardware components and software applications work in a coordinated manner.
The present system is a complex technology of innovatively adapted hardware devices such as a 3D-ToF-IR camera, an RGB camera, a specially designed tactile display with EMG sensors, bone-conducting earphones, a controller, and IMU, GPS, light detector, and compass sensors. GSM communication can be implemented in a stand-alone device or a smartphone that can work as an intermediate processing device. Passive sensors passively collect environmental data, whereas an active sensor like the 3D-ToF-IR camera emits IR light to estimate distances to objects; see the principal scheme in FIG. 1. Multi-sensory data is used to (i) find needed objects, (ii) locate obstacles, and (iii) infer the BVI user's location in an indoor environment to enable navigation. The devices and sensors observe the environment in real-time and send data via the controller to the web cloud server, wherein a machine learning processor is configured for feature extraction and object recognition, and a web cloud database stores all data.
From the point of view of a BVI end-user, this invention is distinguished among other related wearable indoor navigational ETA novelties because of a) an intelligent user interface based on a unique tactile display and audio instructions, b) a hands-free programmable control interface using EMG, c) a comfortable user-oriented headband design, d) machine learning-based real-time guidance, and e) web-crowd assistance while mapping indoor navigational routes and solving problematic situations in real time.
For efficient indoor navigational performance, the presented ETA system is used in three sequentially interconnected modalities:
Hence, in the ETA system's first modality, buildings' indoor objects and routes are practically explored and recorded by sighted users using the present ETA (electronic traveling aid) system. The sighted users go through the indoor routes, comment on objects, and mark key guidance points while wearing a hands-free device. In this way, sighted users mark indoor landscapes, map navigational directions, and make comments using the system's web crowd-assisted interface. Input data from indoor routes is processed in the web cloud server using a machine learning algorithm and collected in the web cloud database. The best statistical options for successful navigation are estimated periodically in the web cloud DB using deep neural networks or other artificial intelligence-based methods. In this way, BVI users can choose the fastest, shortest, stair-free, most used, best rated, most recent, or other route options. Route updates are continually sent from the sighted users and BVI users to the web cloud server. Such assistance works through social networking when relatives, neighbors, friends, and other people voluntarily and periodically use the ETA system to record the indoor routes most important for BVI users. Therefore, even various ever-changing indoor situations like renovations, furniture movements, closed doors, and the like can be recorded and updated continually through social networking. We consider such a social networking approach an essential innovative method in the field of ETA applications for the BVI.
In the ETA system's second modality, BVI persons can choose a building and the desired indoor destination from the web cloud DB. Based on the user's preferences, the best route is suggested (fastest, shortest, stair-free, most used, best rated, most recent, or other options) based on the analyzed, semantically enriched, interpreted, and statistically validated indoor routes using information gathered in the ETA system's first modality. In this way, BVI users can use navigational instructions to (i) get acquainted with the chosen route and (ii) use the instructions while orientating and navigating indoors. Machine learning and robot navigation approaches are innovatively adapted for this task. Robot navigation means a robot's ability to determine its position in its frame of reference and then plan a path towards some goal location. Following modern computer vision (visual odometry) based autonomous robot navigation techniques (representations of the environment, sensing models, and localization algorithms), a similar visual odometry-based approach is applied to navigate BVI persons indoors. In the preferred embodiments of the instant invention, the indoor map building approach uses the sighted users' DB of collected routes and an audio-tactile guiding interface. After a navigational experience is complete, the BVI user's feedback is used to evaluate, improve, and rate navigational route information in the web cloud DB, see
In the third modality, while using the ETA system for navigation, BVI users can get online assistance in complex, unanticipated indoor orientation and navigation situations. It is designed to help BVI users when they are in unanticipated situations that the pre-computed routes and ETA guidance cannot resolve (e.g., deviating too far from the prescribed route, encountering unrecognized obstacles, etc.) while navigating indoors in the second operational ETA modality. In those situations, BVI users can manually, using voice commands or EMG signals, switch from the second to the third modality to resolve the problem and later return to the second operational ETA modality and continue on the guided navigational route (see
In the third modality, a web crowd assistance interface is activated. That is, BVI users, via the ETA system's mobile application, can initiate a communication session with a selected sighted user. The ETA system sends the selected sighted user the BVI cameras' views, the last successful location ID, sensory information, the building evacuation map, or other building interior layout schemes. Provided with all this information, the selected sighted user can make real-time suggestions in complex situations and help resolve the problem.
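One plausible shape of this assistance payload is sketched below in Python; the field names and values are illustrative assumptions rather than the claimed data format.

from dataclasses import dataclass, field

# Hypothetical assistance payload; all field names are illustrative.
@dataclass
class AssistanceRequest:
    video_stream_url: str          # live view from the BVI user's cameras
    last_location_id: str          # last successful location ID on the route
    route_id: str                  # identifier of the route being navigated
    seconds_since_confirm: float   # time since the last confirmed position
    speed_mps: float               # movement speed since that position
    heading_deg: float             # movement direction since that position
    layout_schemes: list = field(default_factory=list)  # evacuation map, floor plans

req = AssistanceRequest("rtsp://example.net/bvi-300", "loc-17", "route-5",
                        42.0, 0.9, 270.0)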
Thus, the ETA system's web crowd assistance interface ensures (i) a real-time mobile connection between BVI and sighted users, (ii) delivery of the BVI user's sensor information to the sighted user, and (iii) delivery of third-party information concerning the layout of the buildings to the sighted user. The designed mobile application is a technical interface for transferring the information mentioned above and for communication between a BVI user and a sighted user.
Thereby, sighted users can help interpret the route, current position, obstacles, or other complex surrounding circumstances by using information sent from the web cloud DB and from the BVI ETA system's cameras. While the current camera view is provided to the registered sighted user, it is often not enough for the sighted user to make meaningful supporting decisions because they need to understand the contextual grand view. Therefore, sighted users can get additional information from the ETA system and online DB about the BVI user's current or last confirmed position on the current route map. The ETA system can also provide time, speed, and movement direction since the last confirmed position, suggesting where the BVI user went astray and where the BVI user is currently located. Additionally, sighted users can retrieve building floor schemes and other relevant information from the third parties collected in the online DB.
Although the invention has been briefly described by way of preferred modalities, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention.
A wearable system is configured to help navigate blind and visually impaired (BVI) people 300 in an indoor environment. It comprises passive sensors (RGB camera, IMU, GPS, light detector, EMG) and an active sensor (3D-ToF-IR camera) 110-150 (FIG. 1), which observe the environment in real-time. Sensors 110-150, a tactile display 520, bone-conducting earphones 510, and myographs 110, together with a controller 600, are implemented in a wearable and comfortable headband 300. Processed and interpreted environmental real-time data is sent back to the user in the form of an instructive audio-tactile interface with the help of the bone-conductive headphones 510 and a specially designed vibrotactile display 520. A hands-free control interface is implemented using myographic input 110, in which facial muscles are used to send operational commands to the system, or speech recognition.
BVI users can also interact with the ETA system using hand gestures 210, which are captured by the BVI user's camera and interpreted by the ETA machine learning algorithms 800, one of which recognizes hand gestures that pinpoint smaller visual regions for an object's closer examination, zooming, or text recognition in the indicated area (see FIG. 1).
A smartphone configured with this ETA system application 240 and a web cloud server 700 are coupled to the system controller 600. Machine learning and computational vision processes 800 recognize and couple the multi-sensory data into meaningful patterns. Useful objects, specific user-defined objects, and scenic views are depicted and interpreted using deep neural networks or other artificial intelligence methods. Multi-sensory data is used to (i) find target objects, (ii) locate obstacles, and (iii) infer users' location in an indoor environment for navigation.
The system can be used in three modalities (
In the second modality, BVI users 300 can choose a building and the desired indoor destination from the web cloud DB 900. For that purpose, the system provides a statistically validated indoor route using information gathered in the first modality. In this way, BVI users can use navigational instructions to (i) get acquainted with the chosen route and (ii) use it while orientating and navigating indoors. Machine learning and robot navigation 800 are innovatively adapted for this task. After the trip, BVI users' feedback is used to evaluate, improve, and rate navigational route information in the web cloud DB 900, see
BVI users 300 can get online audio help in complex, unanticipated indoor situations in the third modality. Using smartphones 240 and the web crowd-assisted interface 400, BVI users can call a sighted user 400 who is familiar with that building and indoor route.
Sighted users help interpret the route, current position, and visual information sent in real-time from (i) the web cloud DB 900 and (ii) the BVI system's cameras 120, 150, see
According to
In the third modality (
In the absence of Wi-Fi, a mobile Internet connection is used from the BVI user's 300 smartphone 240. Otherwise, the 4G/5G wireless communication installed in the system controller 600 is used for data exchange with the web cloud server 700, as well as for the possibility to contact a sighted user 400.
To meet BVI users' needs for indoor navigation and orientation, the ETA system's hardware and software can integrally operate in three modalities, see
Modalities 801-803 can be interpreted as the ETA system's basic working regimes, each characterized by a default set of eight differently activated operational modes 810-890, see
Modes can work in three different ways—basic, background, or not active (see
Thus, in each modality 801-803, only one basic mode can be activated (see
It is important to note that in the ETA system, all three modalities 801, 802, and 803 work in coordination with one another. First of all, it is crucial that Modality #1 801 is used for collecting indoor navigational route data by sighted users 400. Modality #1 utilizes the ETA system's functionality that includes a specially designed sighted users' environment in the mobile application, which transmits navigational and semantic data for processing and storing in the web cloud database 900 (see
In the preferred embodiment, sighted users gather route information by walking indoors and step-by-step semantically commenting on points of interest (like stairs, entries, exits, doors, WC, etc.) using the ETA system in Modality #1 (see
In the first modality 801, sighted users 400 go through buildings and gather indoor route information to be stored in an online DB 900 where the machine learning processes 800 take place in a web cloud server 700 (see
In the second modality 802, the web cloud server's 700 navigational route information (stored in the online DB 900) is used by BVI users 300 for indoor navigational purposes in a chosen building.
Based on the user's preferences (fastest, shortest, stair-free, most used, best rated, most recent, or other options), the machine learning software 800 suggests the best route. The ETA system provides analyzed, semantically enriched, interpreted, and statistically validated indoor routes using information gathered in the first modality. In this way, BVI users can use navigational instructions to (i) get acquainted with the chosen route and (ii) orient themselves and navigate indoors. Machine learning 800 and robot navigation approaches are innovatively adapted for this task.
After the trip, BVI users' 300 feedback is used to evaluate, improve, and rate navigational route information in the web cloud DB 900. BVI users 300 can make additional comments and mark location IDs. Such feedback helps to estimate the route and improve its validity by the end-users, the BVI 300. For that reason, ant colony optimization algorithms (swarm intelligence) can be adapted. This probabilistic technique solves computational problems that can be reduced to finding good paths through graphs. That is, sighted users and BVI users serve as swarm agents who help the ETA system find the best routes.
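A minimal sketch of this idea, assuming a toy route graph with location IDs as nodes and traversal times as edge costs (Python; the graph, parameters, and update rule are illustrative, not the claimed implementation):

import random

graph = {
    "entrance": {"hall": 10, "stairs": 25},
    "hall": {"elevator": 15, "stairs": 20},
    "stairs": {"room_204": 30},
    "elevator": {"room_204": 12},
    "room_204": {},
}

pheromone = {(a, b): 1.0 for a in graph for b in graph[a]}

def ant_walk(start, goal, alpha=1.0, beta=2.0):
    """One agent (one recorded trip) samples a path through the route graph."""
    path, node = [start], start
    while node != goal:
        edges = graph[node]
        if not edges:
            return None                      # dead end, discard this walk
        weights = [pheromone[(node, n)] ** alpha * (1.0 / cost) ** beta
                   for n, cost in edges.items()]
        node = random.choices(list(edges), weights=weights)[0]
        path.append(node)
    return path

def path_cost(path):
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

best = None
for _ in range(200):                          # each iteration = one agent trip
    path = ant_walk("entrance", "room_204")
    if path is None:
        continue
    for edge in zip(path, path[1:]):          # deposit pheromone on used edges
        pheromone[edge] += 100.0 / path_cost(path)
    for edge in pheromone:
        pheromone[edge] *= 0.95               # evaporation keeps ratings fresh
    if best is None or path_cost(path) < path_cost(best):
        best = path
print("best-rated route:", best)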
In the third modality 803, while using the ETA system for navigation, BVI users 300 can get online web crowd assistance in complex, unanticipated indoor orientation situations such as getting lost or encountering unexpected obstacles. The ETA system can be used in real-time to get voice-guided help (through the bone-conductive headphones 510) from a sighted user 400 who is familiar with the particular route or building. It is important to note that modes #1-7 810-870 (see
Sighted users' contact information is contained in the online DB if they volunteer to consult BVI in complex situations while traveling through routes known to the sighted user. A preferred sighted user is familiar with the indoor ETA guiding system and with some indoor routes they have traveled. Therefore, such sighted users are selected by the ETA guiding system. Their contacts are sent via the mobile application 245 to the BVI user to provide verbal assistance interpreting routes, current position (if they are lost), obstacles, or other surrounding complex circumstances on the route. Suppose a sighted user, who is well familiar with the particular route, is not available for immediate online communication. In that case, using the mobile application, the ETA system provides to the BVI user a list of other sighted users who can be contacted based on the rankings obtained from the BVI users' rating feedback.
A selected sighted user is provided with relevant information from the web cloud DB 900 and the BVI ETA system's cameras 120, 150, which provide the current camera view to the sighted user 400. The information provided to the selected sighted user 400 may include the BVI user's current or last confirmed position on the current route map. The ETA system can also provide time, speed, and movement direction since the last confirmed position, suggesting where a BVI user went astray and is currently located. Besides, sighted users can retrieve building floor schemes and other relevant information from the third parties collected in the web cloud DB 900. With all this information at hand, sighted users can provide useful navigational support for the BVI users 300 in complex indoor situations, especially if the system can select those sighted users 400 who are familiar with that building or route.
The composition of hardware elements provides hands-free command input via the EMG 110 or microphone 500 and environment information feedback transferred to the forehead via the vibrotactile display 520. Control of these hardware elements is closely related to the software, see
In the first modality (see
In the first modality, route updates are continually sent from the sighted users and BVI users, which works through social networking via the mobile application 245.
Therefore, various changing indoor situations like renovations, furniture movements, closed doors, and the like can be recorded and updated continually in the web cloud DB 900.
In the controller array 162, the system controller 600 may be an application-specific integrated circuit (ASIC), a low-cost ARM-based microcontroller, a small single-board computer, or another embedded computer. An antenna 172 is used for communication with the input device 270 (switching board 230 or smartphone 240).
The sensor array 163 includes an RGB camera 120, a 3D-ToF-IR camera 150, an electromyograph (EMG) 110, an inertial measurement unit (IMU) 130, and a light detector 140.
Passive sensors passively collect environmental data, whereas an active sensor like the 3D-ToF-IR camera 150 emits IR light to estimate distances to objects. The sensors 163 observe the environment in real-time and send data via the controller 600 to the machine learning processing 800 on the web cloud server 700, where feature extraction, recognition, and storage in the web cloud database 900 occur.
The RGB camera 120 is used for color image input, which is, in turn, used to detect a set of trained object classes that are essential to BVI users (such as corridor, door, elevator, stairs). The color images are also used to detect and recognize faces, wherein a list of recognized faces can be managed by a user. The RGB camera 120 is also used to accept color images and provide a textual description of the depicted scene. This textual information is provided to the BVI user through the audio channel 510 (headphone).
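The source does not name a particular detector, so the following is only a hedged sketch of this detection step using an off-the-shelf torchvision model as a stand-in; in practice, the network would be fine-tuned so that its label set covers the BVI-relevant classes assumed below.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stand-in detector; the BVI class list and threshold are assumptions.
BVI_CLASSES = {"corridor", "door", "elevator", "stairs"}
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_bvi_objects(image, label_names, score_thr=0.6):
    """image: (3, H, W) float tensor in [0, 1]; returns (name, box) pairs."""
    with torch.no_grad():
        out = model([image])[0]
    return [(label_names[int(l)], box.tolist())
            for l, s, box in zip(out["labels"], out["scores"], out["boxes"])
            if s >= score_thr and label_names[int(l)] in BVI_CLASSES]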
The IMU 130 may comprise an accelerometer, a gyroscope, a magnetometer, and/or an acceleration or positioning sensor. The IMU 130 may be utilized to determine the positioning of the user and/or the cameras 120, 150. The system continually tracks IMU 130 information, which allows tracing the BVI user's 300 route back to the last known location ID. While navigating indoors with the ETA guiding system, when the BVI person 300 gets lost or disoriented, the system can perform dead reckoning, i.e., guide the user back to the last known location ID place.
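A minimal dead-reckoning sketch, assuming the IMU stream has already been segmented into (heading, distance) legs since the last confirmed location ID (the data format and function names are assumptions):

import math

# Illustrative (heading_degrees, distance_meters) legs recorded since the
# last confirmed location ID; the segmentation is assumed done upstream.
segments = [(0.0, 5.0), (90.0, 3.0), (90.0, 2.0)]

def offset_from_last_id(segments):
    """Planar displacement accumulated since the last known location ID."""
    x = y = 0.0
    for heading, dist in segments:
        x += dist * math.sin(math.radians(heading))   # east component
        y += dist * math.cos(math.radians(heading))   # north component
    return x, y

def guidance_back(segments):
    """Heading and distance that lead the user back to the last location ID."""
    x, y = offset_from_last_id(segments)
    back_heading = math.degrees(math.atan2(-x, -y)) % 360.0
    return back_heading, math.hypot(x, y)

heading, dist = guidance_back(segments)
print(f"turn to {heading:.0f} deg and walk {dist:.1f} m")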
The light detector 140 in the sensor array 163 can provide additional information about the BVI user's environment. For example, the light detector may be used to assess the level of illumination in the environment during graphical information processing.
The interface array 164 includes a microphone 500, a headphone 510, a tactile display 520, a smartphone 240, and optionally, a switching board 230.
In the third modality, when a BVI user 300 encounters complicated indoor navigation situations like (i) deviation from the chosen route, (ii) unpredicted obstacles that cannot be avoided, (iii) a missing next location ID, and so on, the BVI user 300 can initiate a communication session (using the smartphone 240 or other devices) with a selected sighted user 400 for online assistance to resolve the problem. During the communication session, the sighted user 400 can obtain almost real-time access to the BVI user's 300 camera 120, 150 views. In a preferred embodiment, a BVI user 300 initiates the mobile application's web-assisted interface, which can transfer the ETA system's camera video streams to the BVI user's mobile phone 240 and then, through the GSM connection, to the selected sighted user's mobile phone. In other embodiments, it is possible to transmit directly from the BVI user's wearable system's integrated GSM wireless communication module to the remote sighted user's mobile phone. The BVI user can select a sighted user to contact from a ranked list provided in the mobile application. A sighted user familiar with that indoor location (for instance, one who participated in mapping that indoor terrain in Modality #1) is at the top of the list.
The microphone 500 is a device capable of receiving voice commands and letting the ETA system work in the hands-free mode in the second and third modalities.
Referring to
The system controller 600 can output a pulse-width modulated signal
Referring to
The antenna 172 may be one or more antennas capable of transmitting and receiving wireless communications. For example, the antenna 172 may be a Bluetooth antenna, a Wi-Fi antenna, and/or a mobile telecommunication antenna (e.g., fourth or fifth generation (4G, 5G)).
Referring to FIG. 5, the ETA system's user sets one of three modalities (801, 802, or 803) with basic and background operating modes (810-890). The choice of system operation depends on the user. In the indoor guiding ETA system, there are two types of users:
During operation, the BVI user 300, using the input module 610, selects one of the system's operating modalities 620. An ETA system user can select one of three operation modalities (
The wireless communication module 630 transmits the video and/or audio information together with the operation flag of the system variant 631 to the server 700 for further processing. The wireless communication module 630 uses an active mobile phone 240 or a selected GSM hardware wireless communication module with a 4G connection to exchange information between the server 700 and other system modules. The information received 632 from the server 700 is processed to isolate the operating variant and activate the corresponding output module 640: the tactile display 520 or bone-conducting headphones 510; in some cases, it may be both modules. It is then determined whether the BVI user 300 tries to change the basic mode or quit the modality. If nothing is selected, the ETA system continues the operation in the same modality. If the user makes a selection, the server determines whether the BVI user 300 wants to end the operation or to select another operation modality.
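The described flow can be summarized as a simple control loop; the sketch below is a plausible reading with stubbed I/O (all function names are assumptions, not the actual firmware interface):

class DemoIO:
    """Stub standing in for the input module 610 and output module 640."""
    def __init__(self):
        self.steps = iter(["continue", "switch", "quit"])
    def select_modality(self): return 2            # e.g., guided navigation
    def read_sensors(self): return {"imu": [], "rgb": None}
    def exchange_with_server(self, frame, flag): return {"cue": "forward"}
    def drive_outputs(self, reply): print("cue:", reply["cue"])
    def poll_user_choice(self): return next(self.steps, "quit")

def run(io):
    modality = io.select_modality()
    while modality is not None:
        frame = io.read_sensors()
        reply = io.exchange_with_server(frame, flag=modality)  # flag 631
        io.drive_outputs(reply)         # tactile display 520 / headphones 510
        choice = io.poll_user_choice()  # EMG, voice, or switching board input
        if choice == "quit":
            modality = None             # end operation
        elif choice == "switch":
            modality = io.select_modality()

run(DemoIO())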
The RGB camera 120 and 3D-ToF-IR camera 150 provide depth and distance information around the wearable device; in this way, it is also possible to better identify the elements of the environment around the user. Combining the IMU 130 and the cameras 120, 150 is beneficial because the combination can provide more accurate feedback to the user. The light detector 140 helps evaluate the room's lighting so that appropriate adjustments can be made when processing the image.
The RGB camera 120 may consist of a control part connected via a cable 121 (for flexibility) to the lens 122, as shown in FIG. 9.
The EMG sensors 110, 111 illustrated in
The input module can receive input data streams such as hand gestures 210, EMG 110 signals, verbal commands using a microphone 500, and a switching board 230 on a BVI white cane 250. Vibrational motors 523 of the tactile display 520 are arranged perpendicular to the forehead of the BVI user 300 (see
A unique, hands-free command and entry-confirmation interface is offered. The designed ETA system can recognize BVI users' 300 predefined commands by EMG signals through the electromyographic sensors 110, 111. The ETA system uses machine learning algorithms 800 to learn each user's EMG control commands. The system captures signals and reports to the user about detected and recognized commands via the bone-conducting headphones 510 or tactile display 520. If the right command is received, the BVI user 300 confirms the control command either using another EMG signal (110, 111), a voice command (500), or nodding his head (in the latter case, IMU 130 parameters are captured). Then, the ETA system is ready to accept the next command via the EMG sensors 110, 111. The command sequence and type can be encoded individually by each BVI user 300.
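A compact sketch of this command/confirmation handshake (Python; the callbacks stand in for the EMG classifier 110/111, the audio-tactile echo 510/520, and the confirmation input, and are assumptions):

def emg_command_session(classify, announce, confirm):
    """classify(): command recognized from EMG 110/111; announce(): audio or
    tactile echo via 510/520; confirm(): True on a confirming EMG signal,
    voice command, or head nod (IMU 130)."""
    command = classify()
    announce(f"recognized: {command}")
    if confirm():
        return command        # accepted; the system awaits the next command
    return None               # rejected; the user re-enters the command

# Illustrative use with stub callbacks:
cmd = emg_command_session(lambda: "switch_modality", print, lambda: True)
print("executing:", cmd)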
The mobile interface (mobile phone 240) is also used to enter verbal information. In the first modality, sighted users 400 send comments to the web cloud server 700 about obstacles or other information that may be useful to BVI users 300 about the route. In the third modality, the mobile interface is used by the BVI user 300 to contact a selected sighted user to receive support in complex and unanticipated situations.
In the case of verbal commands, the commands are received through the microphone 500 by the system controller 600 and then passed to the web cloud server 700 for execution.
With the switching board 230, commands are transmitted to the system controller 600 via a wireless interface (such as Bluetooth, Wi-Fi, or 4G/5G communication networks), and are further routed to the web cloud server 700 (if necessary) for the execution of selected commands.
The user output module 640 consists of an audio information transmission device, such as a bone-conductive headphone 510, and a vibrotactile display 520 (see
Depending on the operation modality, the output devices can operate individually or together. For instance, when the ETA system is in the second modality (BVI navigation mode), guiding directional information is transmitted to the user via audio voice (headphone 510) and the tactile display 520 in a mutually coordinated manner.
The software system is implemented on a two-part platform (the wearable device's system controller 600 and the server) and encapsulates the software elements listed below.
Sensor drivers: the sensor interface 110-150 consists of software for reading the system's sensor data.
User input 610 and output 640 modules compose a control interface, which allows the user to:
This control interface utilizes drivers for (i) the switching board 230, EMG 110, microphone 500, and gestures 210 as control input methods 610 and (ii) the audio bone-conductive earphones 510 and tactile display 520 as control feedback output methods 640, see
Network interfaces: wearable controller network interface 625, wireless communication 630, and server network interface 710 make data connections between wearable and remote system components.
Smartphone app 245 is used by BVI users in the first modality to configure the main parameters of the ETA system. It is also used by the BVI persons and sighted users in the third modality (see
Modality software: the ETA system's operation modalities are implemented as modality software 630, 720 based on software libraries (modules) describing the system's operation in different scenarios (e.g., see
Mode modules: a set of computer vision, navigation, and social networking support modules corresponds to the different basic modes 800. The user can activate these modes via the control interface 610.
Additionally, the object detection component is used to memorize and recognize places (e.g., end of the trajectory) and objects, which help navigation in the corresponding route (see
All the modules stream their outputs through a TCP/IP network, which is accessible to other required components. The outputs of the modules mentioned above are processed via “text-to-speech” and other algorithms and presented via audio-tactile display to the BVI user, which is calibrated by using Algorithm 3.
Note that modules 1-7 rely mostly on known algorithmic approaches (such as object detectors, scene captioning, etc.), and we do not include their detailed technical description since they are out of the scope of this invention. However, some of these algorithms also include essential innovations, which are described further.
Algorithm 1: CNN object detector data augmentation algorithm.
An object detection data augmentation algorithm is described herein, which includes information about object-like structures in the training data and helps make training more efficient. This algorithm allows object detector neural networks to be trained faster, which is important for the present invention. For example, in
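The text does not spell the algorithm out, but one plausible reading is a copy-paste-style augmentation in which annotated object crops are inserted into training frames so that every synthetic image carries extra object-like structures; the sketch below follows that assumption and is not the patented algorithm itself.

import random
from PIL import Image

def augment(background, object_crops, n_paste=2):
    """Paste (crop, class_id) pairs into a frame; assumes crops fit inside it."""
    img = background.copy()
    boxes = []
    for crop, cls in random.sample(object_crops, k=min(n_paste, len(object_crops))):
        x = random.randint(0, img.width - crop.width)
        y = random.randint(0, img.height - crop.height)
        img.paste(crop, (x, y))
        boxes.append((cls, x, y, x + crop.width, y + crop.height))
    return img, boxes  # synthetic image plus its generated box labels

# Illustrative use with synthetic images:
bg = Image.new("RGB", (640, 480), "gray")
crops = [(Image.new("RGB", (60, 120), "brown"), 1)]   # e.g., a "door" crop
image, labels = augment(bg, crops)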
Algorithm 2: Imitation-learning based controller for BVI user navigation.
Since manual labeling of images is a time-consuming process, an algorithm for the automated pairing of camera images with motion classes extracted from raw wearable IMU data is described herein. This algorithm is configured to automatically label training sets for the training of imitation-learning controllers, whose output corresponds to three movement classes (“forward”, “left”, “right”) and a prediction reliability estimate, which is important to our application:
2.1 Raw IMU time series of the movement classes ("forward", "left", "right") are collected by separately executing the corresponding movement.
2.2 We use a convolutional LSTM classifier with four outputs (the first three outputs correspond to the classes "forward", "left", and "right", and the fourth one is a prediction reliability estimate). Softmax activation is used for the first three outputs, and sigmoid activation is used for the prediction reliability estimate. The model is trained using a modified binary cross-entropy (BCE) loss (1):
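The referenced equation (1) can plausibly be reconstructed from the definitions that follow as a per-class binary cross-entropy summed over the outputs (a hedged reconstruction, not a verified formula from the source):

$$\mathrm{BCE}(x) = -\sum_{c} \Big[\, y_c \log \hat{y}_c(x) + \big(1 - y_c\big) \log\big(1 - \hat{y}_c(x)\big) \Big] \tag{1}$$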
where y_c is the ground-truth label and ŷ_c(x) is the corresponding class prediction (probability).
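A sketch of such a four-output convolutional LSTM classifier is given below (PyTorch; the layer sizes and the six-channel IMU input are illustrative assumptions; only the softmax/sigmoid head structure follows the text above):

import torch
import torch.nn as nn

class MotionController(nn.Module):
    """Four-output convolutional LSTM: 3 movement classes + reliability."""
    def __init__(self, n_channels=6, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 32, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 4)

    def forward(self, imu):                    # imu: (batch, time, channels)
        h = torch.relu(self.conv(imu.transpose(1, 2))).transpose(1, 2)
        _, (h_n, _) = self.lstm(h)
        out = self.head(h_n[-1])
        move = torch.softmax(out[:, :3], dim=1)      # "forward"/"left"/"right"
        reliability = torch.sigmoid(out[:, 3:])      # prediction reliability
        return move, reliability

move, rel = MotionController()(torch.randn(2, 100, 6))  # two 100-step windows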
Algorithm 3: Camera frame to tactile display transformation.
This transformation is required to represent rectangles in camera frames as tactile display vibromotor activations. We assume that the tactile display's coordinate frame is located at the top-left element (see
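One plausible reconstruction of this transformation, assuming each pixel coordinate (x, y) of a rectangle corner is scaled to the vibromotor grid and rounded up:

$$(m, n) = \left( \left\lceil \frac{x\, W_M}{W_I} \right\rceil,\; \left\lceil \frac{y\, H_M}{H_I} \right\rceil \right)$$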
where W_I and H_I are the image width and height, respectively, W_M and H_M are the numbers of vibromotors in the columns and rows of the tactile display, and the brackets ⌈ ⌉ denote the round-up to the nearest integer (ceiling) operator.
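In code, the mapping and its extension from corner points to the full set of covered display cells could look like this (a sketch under the reconstruction above; the grid sizes in the example are assumptions):

import math

def to_motor_cell(x_px, y_px, img_w, img_h, motors_w, motors_h):
    """Map a pixel (x, y) to a 1-based (column, row) on the tactile display."""
    col = math.ceil(x_px * motors_w / img_w)
    row = math.ceil(y_px * motors_h / img_h)
    return max(col, 1), max(row, 1)

def rect_to_cells(left, top, right, bottom, img_w, img_h, motors_w, motors_h):
    """All display cells covered by a detection rectangle in the camera frame."""
    c1, r1 = to_motor_cell(left, top, img_w, img_h, motors_w, motors_h)
    c2, r2 = to_motor_cell(right, bottom, img_w, img_h, motors_w, motors_h)
    return [(c, r) for r in range(r1, r2 + 1) for c in range(c1, c2 + 1)]

# e.g., a door detected in a 640x480 frame on a 5x4 vibromotor grid:
print(rect_to_cells(200, 120, 430, 460, 640, 480, 5, 4))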
Algorithm 4: Electromyography (EMG) command classifier.
This module accepts as input time-series data obtained from the electromyography sensors and classifies these time series into a set of classes, which correspond to user commands. The classifier is based on a convolutional LSTM RNN architecture, which allows working with unfiltered signals (filtering is implemented in the first layers of the RNN and is adaptively learned in an end-to-end manner during model training).
When the ETA system is in the second modality (BVI navigation Mode #7), guiding directional information is transmitted to the BVI user 300 via audio voice (headphone 510) and the tactile display 520 in a mutually coordinated manner. Thus, the system controller 600 starts the vibrating motors 523 one after another according to the navigational movement direction; a running wave is formed as shown in
Meanwhile, composite moving waves are meant to prepare BVI users 300 for coming turning points (similarly to car GPS navigators that show prospective turns ahead of time). With the EMG's 110 help, BVI users can choose a distance X1 = [2; 15] meters at which the ETA guiding system warns about coming turning points ahead of time. This allows BVI users 300 to anticipate and prepare for future changes of movement direction.
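A sketch of the running-wave drive pattern (Python; the step timing and the drive(column, row, on) motor callback are assumptions standing in for PWM control of the vibromotors 523):

import time

def running_wave(drive, motors_w, motors_h, direction, step_s=0.08):
    """Sweep the vibromotor columns: left-to-right for 'right', the reverse
    for 'left'; drive(column, row, on) is an assumed motor-driver callback."""
    cols = list(range(1, motors_w + 1))
    if direction == "left":
        cols.reverse()
    for c in cols:
        for r in range(1, motors_h + 1):
            drive(c, r, True)
        time.sleep(step_s)              # the wave travels one column per step
        for r in range(1, motors_h + 1):
            drive(c, r, False)

running_wave(lambda c, r, on: None, 5, 4, "right")   # stub driver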
According to
According to
According to
We also foresee additional functionality: a zooming option for the central view. It could help BVI users feel more details of the chosen object via the tactile display, i.e., its height, form, and distance.
The indoor navigation system comprises methods for indoor routing in the first modality: in the presented system, buildings' indoor objects and routes are first explored and recorded by sighted users using the ETA system and interface (see
This works through social networking when relatives, neighbors, friends, and other people voluntarily and periodically use the ETA system to record the indoor routes most important for BVI users, see
Therefore, even various daily changing indoor situations like renovations, furniture movements, closed doors, and the like can be recorded and updated continually by sighted users through social networking in the web cloud DB. Sighted users and the ETA system can either guide around the obstacle or suggest another route to continue. Thus, the presented innovative web crowd-assisted method enables BVI users to get the latest information about indoor routes' suitability.
Thus, in the web cloud DB, routes are analyzed, summarized, and enhanced using sighted users' records of multisensory data (location points' visual and semantic IDs) and third-party information (e.g., building floor plans, indoor maps, etc.). The best statistical options for successful navigation are estimated each day in the web cloud DB using deep neural networks or other artificial intelligence-based methods, see
After each guiding route's practical experience, BVI users (and, correspondingly, the ETA system) can rate the route's validity, with personal averaged ratings ascribed to the route (and to the sighted user who recorded it). This allows other BVI users to choose from the best-rated routes and to get off-line guidance from the best-rated sighted users.
After sighted users carry out indoor route mapping with the ETA system in the first modality (see
In the navigational mode, the BVI user's wearable ETA system generates a video and sensory data stream provided to the web cloud database and machine learning algorithms for analysis. In this way, objects, location IDs, scenes, and sensory data recognition occur almost in real-time to provide navigational guiding support for the BVI user.
When a BVI user gets lost or disoriented, the ETA system can use a dead reckoning method, i.e., guide the user to the last known location ID place (see FIG. 15 and FIG. 16). The system continuously tracks accelerometer, magnetometer, gyroscope, and compass information, which allows the BVI user to be traced back to the last known location ID. Such disorientation cases can be recorded, depersonalized, and processed to warn prospective BVI users and improve the route guiding quality.
While navigating indoors with the ETA guiding system, the BVI user can approve, rate, and add to the route's navigation and orientation information in the DB (for instance, mark new objects, provide voice comments, make new location IDs, etc.). This information is used for the improvement, validation, credibility, and rating of routes. Similarly, the BVI user can add comments to the route regarding observed difficulties, inaccuracies, and errors.
Wayfinding and indoor navigation services for the BVI population generally have to perform one or more of the following functions: familiarization, localization, route planning, and communicating with the user in a meaningful manner through an accessible interface. The proposed experience-centric BVI user navigation system is wholly configured to achieve the following benefits:
The above-mentioned advantages work well only in the context of the whole ETA system (see
In real-life situations, even regularly updated navigational web cloud databases of indoor routes cannot account for unpredictable and complicated situations caused by accidents, other humans, machines, and BVI persons themselves. Thus, unlike other similar in-kind devices and systems, the present ETA system can provide real-time help in complex situations. For that matter, in the third modality, when a BVI user encounters a complicated indoor navigation situation like a) deviation from the chosen route, b) unpredicted obstacles that block the traversed path, c) a missing next location ID, and so on, the BVI user can make a real-time video call to sighted users for online help to resolve the indoor problem. In this feature, sighted users can obtain almost real-time access to the BVI camera's view (see
Before calling a sighted user, the ETA system can propose to the BVI user a way back to the last identified location ID place where the BVI person became lost. For that reason, the ETA system recalls the recent multi-sensory data stream (walking directions and speed, distances of each straight walking segment) to generate guiding instructions back. Machine learning algorithms process the situation, reexamine the route validity, and propose wayfinding guidance to include new location IDs or recognizable objects.
However, if that does not help, the BVI person can call a sighted person for real-time help, using the third modality of the ETA system (see
Such information enables the sighted user to be better informed and to better understand the problem's context. That is, it helps to see the problem from a grand perspective view, as well as saving mobile connection time and making assisting efforts more effective.
It is important to note that the ETA system can provide a ranked list of the sighted users who are most familiar with the place or problem the BVI user is facing.
The ETA system scans visual and sensory data to give feedback and help navigate to the next location ID place or the destination while a sighted user guides the BVI user.
The above-described web-crowd-assisted social networking support methods have been described in detail with particular reference to certain preferred aspects thereof, but it will be understood that variations, combinations, and modifications can be effected by a person of ordinary skill in the art within the spirit and scope of the invention.
Proceedings of the 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 9-11 Jan. 2017; pp. 1-6.
8. Raufi, B., Ferati, M., Zenuni, X., Ajdari, J., Ismaili, I. F. Methods and techniques of adaptive web accessibility for the blind and visually impaired. Procedia - Social and Behavioral Sciences, vol. 195, 2015, pp. 1999-2007.