This application claims priority to Chinese Patent Application No. 201910770229.6 filed on Aug. 20, 2019, the contents of which are incorporated by reference herein.
The subject matter herein generally relates to aids for disabled persons, and especially relates to a navigation method for a blind person and a navigation device using the navigation method.
In the prior art, blind persons can use sensors to sense road conditions. However, the navigation functions of such sensors are generally short-ranged.
Implementations of the present disclosure will now be described, by way of embodiments, with reference to the attached figures.
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the related relevant feature being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.
The present disclosure, including the accompanying drawings, is illustrated by way of examples and not by way of limitation. Several definitions that apply throughout this disclosure will now be presented. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one”.
The term “module”, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, Java, C, or assembly. One or more software instructions in the modules can be embedded in firmware, such as in an EPROM. The modules described herein can be implemented as either software and/or hardware modules and can be stored in any type of non-transitory computer-readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable media include CDs, DVDs, BLU-RAY, flash memory, and hard disk drives. The term “comprising” means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in a so-described combination, group, series, and the like.
Exemplary embodiments of the present disclosure will be described in relation to the accompanying drawings.
In one embodiment, the storage 16 stores collections of software instructions, which are executed by the processor 15 of the navigation device 1 to perform the functions of the following modules. The function modules include an acquiring module 101, a recognizing module 102, an output module 103, a determining module 104, and a reminding module 105. In another embodiment, the acquiring module 101, the recognizing module 102, the output module 103, the determining module 104, and the reminding module 105 are program segments or code embedded in the processor 15 of the navigation device 1.
The acquiring module 101 acquires images around a user by the camera unit 11, and acquires a position of the navigation device 1 by the positioning unit 12. In one embodiment, the camera unit 11 can be a 3D camera, for example, the camera unit 11 can be a 360-degree panoramic 3D camera. In one embodiment, the positioning unit 12 can be a GPS device. The acquiring module 101 acquires the position of the navigation device 1 by the GPS device.
The recognizing module 102 recognizes the images to determine a road condition and an object therein, correlates the images including the road condition with the position of the navigation device 1, and stores the images including the road condition and the position of the navigation device 1 in a database. The road condition includes a distance between the object and the camera unit 11, and an azimuth between the object and the camera unit 11.
In one embodiment, the acquiring module 101 acquires three-dimensional images by the 3D camera. Recognizing the road condition from the three-dimensional images by the recognizing module 102 includes: splitting each of the three-dimensional images into a depth image and a two-dimensional image, recognizing an object of the two-dimensional image, and calculating the distance between the object and the 3D camera, and the azimuth between the object and the 3D camera, by a time of flight (TOF) calculation. In one embodiment, the recognizing module 102 compresses the images including the road condition by an image compression method, correlates the compressed images including the road condition with the position of the navigation device 1, and stores the images including the road condition and the position of the navigation device 1 in the database. In one embodiment, the image compression method includes, but is not limited to, an image compression method based on MPEG4 encoding, and an image compression method based on H.265 encoding.
In one embodiment, the three-dimensional images include color information and depth information of each pixel, and the recognizing module 102 integrates the color information of each pixel of the three-dimensional images into the two-dimensional image, and integrates the depth information of each pixel of the three-dimensional images into the depth image. The recognizing module 102 can recognize an object of the two-dimensional image by an image recognition method, and calculates a distance between the object and the 3D camera, and the azimuth between the object and the 3D camera by the TOF calculation. In one embodiment, the image recognition method can be an image recognition method based on a wavelet transformation, or a neural network algorithm based on deep learning.
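The splitting of a three-dimensional frame into a two-dimensional color image and a depth image, and the derivation of a distance and azimuth for a recognized object, can be sketched as follows. This is a minimal illustration only: the function names, the median-depth estimate, and the field-of-view-based azimuth are assumptions for the sketch, not part of the disclosure (the disclosure derives distance and azimuth by a TOF calculation).

```python
import numpy as np

def split_rgbd(rgbd):
    """Split an H x W x 4 RGB-D frame into a color image and a depth image."""
    color = rgbd[:, :, :3]   # per-pixel color information
    depth = rgbd[:, :, 3]    # per-pixel depth information (meters)
    return color, depth

def distance_and_azimuth(depth, bbox, horizontal_fov_deg=90.0):
    """Estimate distance and azimuth of an object from its bounding box.

    bbox is (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    x_min, y_min, x_max, y_max = bbox
    region = depth[y_min:y_max, x_min:x_max]
    distance = float(np.median(region))        # median is robust to noisy pixels
    # Azimuth from the horizontal offset of the box center, scaled by the FOV.
    width = depth.shape[1]
    center_x = (x_min + x_max) / 2.0
    azimuth = (center_x - width / 2.0) / width * horizontal_fov_deg
    return distance, azimuth
```

A positive azimuth here means the object lies to the right of the camera's optical axis, matching the "in front of and to the right" example above.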
The output module 103 outputs images of the objects, the distance between each object and the camera unit 11, and the azimuth between each object and the camera unit 11.
For example, the distance between the object and the camera unit 11 output by the output module 103 can be 8 meters (m), and the azimuth between the object and the camera unit 11 output by the output module 103 can be 10 degrees with the object being located in front of and to the right of the camera unit 11.
The determining module 104 determines whether the object is an obstacle according to the distance between the object and the camera unit 11, and the azimuth between the object and the camera unit 11.
In one embodiment, the object can be an obstacle, including, but not limited to, a vehicle, a pedestrian, a tree, a step, or a stone. In one embodiment, the determining module 104 analyzes the user's movement track according to the location from the positioning unit 12, determines a direction based on the distance between the object and the camera unit 11 and the azimuth between the object and the camera unit 11, determines an angle between the user's movement track and the direction, determines whether the angle is less than a preset angle, and determines that the object is an obstacle when the angle is less than the preset angle. In one embodiment, the preset angle can be 15 degrees.
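The angle comparison described above can be sketched as follows, purely by way of illustration (the function name and the convention of expressing both directions as compass-style headings in degrees are assumptions of the sketch):

```python
def is_obstacle(track_heading_deg, object_direction_deg, preset_angle_deg=15.0):
    """Treat the object as an obstacle when the angle between the user's
    movement direction and the direction to the object is below the threshold."""
    diff = abs(track_heading_deg - object_direction_deg) % 360.0
    angle = min(diff, 360.0 - diff)   # smallest angle between the two directions
    return angle < preset_angle_deg
```

With the 15-degree preset of this embodiment, an object 10 degrees off the user's heading would be flagged, while one 20 degrees off would not.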
The reminding module 105 outputs a warning, including the distance between the camera unit 11 and the obstacle, to the user by the output unit 13. In one embodiment, the output unit 13 can be a voice announcer or a vibrator device.
In one embodiment, the reminding module 105 searches the database for a first road condition of a target location which is within a preset distance from the user, and prompts the user, by the output unit 13, to re-plan his line of movement when the first road condition reveals obstacles or roads that are not suitable for the user. In one embodiment, the preset distance can be 50 m or 100 m. In one embodiment, the roads not suitable for the blind user are waterlogged, icy, or gravel-covered roads. In one embodiment, the sensing unit 14 of the navigation device 1 can sense an unknown object having a sudden appearance around the user, and warn the user as to the unknown object by the voice announcer or the vibrator when the unknown object is sensed. In one embodiment, the unknown object can include a falling rock, or a vehicle bearing down on the user.
In one embodiment, the reminding module 105 acquires a second road condition of the target location which is within the preset distance from the user by the camera unit 11, determines whether the second road condition is identical with the first road condition, and, when the two road conditions differ, stores the second road condition of the target location in the database to replace the first road condition. For example, the reminding module 105 can search the database for the first road condition of the target location which is 60 m away from the camera unit 11, determine that the first road condition includes a rock on the user's road, and, in acquiring the second road condition of the target location by the camera unit 11, determine that the rock no longer exists in the second road condition. The second road condition of the target location is then stored in the database to replace the first road condition.
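The compare-and-replace update of a stored road condition can be sketched as follows, with a plain dictionary standing in for the database (the record layout and helper name are assumptions of the sketch):

```python
def update_road_condition(db, location, second_condition):
    """Replace the stored (first) road condition with the freshly acquired
    (second) one when they differ; db maps location -> condition record."""
    first_condition = db.get(location)
    if first_condition != second_condition:
        db[location] = second_condition   # e.g. the rock is no longer present
    return db[location]
```

In the rock example above, the stored record listing the rock would be overwritten by the newly acquired record in which the rock is absent.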
In one embodiment, the reminding module 105 receives a second target location input by the user, acquires a current location by the positioning unit 12, calculates a path between the second target location and the current location according to an electronic map, acquires the road condition from the database, determines whether the path is suitable for the user according to the road condition, and warns the user when the path is not suitable for the user.
In one embodiment, the reminding module 105 calculates the path between the second target location and the current location by a navigation path optimization algorithm. In one embodiment, the navigation path optimization algorithm includes, but is not limited to, a Dijkstra algorithm, an A-star algorithm, or a highway hierarchies algorithm. In one embodiment, the path is not suitable for the user when frequent puddles and uneven surfaces exist along the path between the second target location and the current location.
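One of the named options, the Dijkstra algorithm, can be sketched as follows; the extension that skips road segments marked unsuitable (e.g. waterlogged or uneven) is an assumption of the sketch, as are the graph representation and function name:

```python
import heapq

def plan_path(graph, start, goal, unsuitable):
    """Dijkstra's algorithm over a road graph, skipping segments in
    `unsuitable`; graph maps node -> list of (neighbor, segment length)."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for neighbor, length in graph.get(node, []):
            if (node, neighbor) in unsuitable:
                continue                      # avoid segments unfit for the user
            nd = d + length
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    if goal not in dist:
        return None                           # no suitable path: warn the user
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))
```

Returning `None` when every route is blocked corresponds to the warning issued when the path is not suitable for the user.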
At block 301, a navigation device acquires images around a user by a camera unit, and acquires a position of the navigation device by a positioning unit. In one embodiment, the camera unit can be a 3D camera, for example, the camera unit can be a 360-degree panoramic 3D camera. In one embodiment, the positioning unit can be a GPS device. The navigation device acquires the position of the navigation device by the GPS device.
At block 302, the navigation device recognizes the images to determine a road condition and an object therein, correlates the images including the road condition with the position of the navigation device, and stores the images including the road condition and the position of the navigation device in a database. The road condition includes a distance between the object and the camera unit, and an azimuth between the object and the camera unit.
In one embodiment, the navigation device acquires three-dimensional images by the 3D camera. Recognizing the road condition from the three-dimensional images includes: splitting each of the three-dimensional images into a depth image and a two-dimensional image, recognizing an object of the two-dimensional image, and calculating the distance between the object and the 3D camera, and the azimuth between the object and the 3D camera, by a time of flight (TOF) calculation. In one embodiment, the navigation device compresses the images including the road condition by an image compression method, correlates the compressed images including the road condition with the position of the navigation device, and stores the images including the road condition and the position of the navigation device in the database. In one embodiment, the image compression method includes, but is not limited to, an image compression method based on MPEG4 encoding, and an image compression method based on H.265 encoding.
In one embodiment, the three-dimensional images include color information and depth information of each pixel, and the navigation device integrates the color information of each pixel of the three-dimensional images into the two-dimensional image, and integrates the depth information of each pixel of the three-dimensional images into the depth image. The navigation device recognizes an object of the two-dimensional image by an image recognition method, and calculates a distance between the object and the 3D camera, and the azimuth between the object and the 3D camera by the TOF calculation. In one embodiment, the image recognition method can be an image recognition method based on a wavelet transformation, or a neural network algorithm based on deep learning.
At block 303, the navigation device outputs the objects of the images, the distance between the object and the camera unit, and the azimuth between the object and the camera unit.
For example, the distance between the object and the camera unit output by the navigation device can be 8 meters (m), and the azimuth between the object and the camera unit output by the navigation device can be 10 degrees with the object being located in front of and to the right of the camera unit.
At block 304, the navigation device determines whether the object is an obstacle according to the distance between the object and the camera unit, and the azimuth between the object and the camera unit.
In one embodiment, the object can be an obstacle, including, but not limited to, a vehicle, a pedestrian, a tree, a step, or a stone. In one embodiment, the navigation device analyzes the user's movement track according to the location from the positioning unit, determines a direction based on the distance between the object and the camera unit and the azimuth between the object and the camera unit, determines an angle between the user's movement track and the direction, determines whether the angle is less than a preset angle, and determines that the object is an obstacle when the angle is less than the preset angle. In one embodiment, the preset angle can be 15 degrees.
At block 305, the navigation device outputs a warning, including the distance between the camera unit and the obstacle, to the user by an output unit. In one embodiment, the output unit can be a voice announcer or a vibrator device.
In one embodiment, the navigation device searches the database for a first road condition of a target location which is within a preset distance from the user, and prompts the user, by the output unit, to re-plan his line of movement when the first road condition reveals obstacles or roads that are not suitable for the user. In one embodiment, the preset distance can be 50 m or 100 m. In one embodiment, the roads not suitable for the user are waterlogged, icy, or gravel-covered roads. In one embodiment, the sensing unit of the navigation device is used to sense an unknown object having a sudden appearance around the user, and warn the user as to the unknown object by the voice announcer or the vibrator when the unknown object is sensed. In one embodiment, the unknown object can include a falling rock, or a vehicle bearing down on the user.
In one embodiment, the method further includes: the navigation device acquires a second road condition of the target location which is within the preset distance from the user by the camera unit, determines whether the second road condition is identical with the first road condition, and, when the two road conditions differ, stores the second road condition of the target location in the database to replace the first road condition. For example, the navigation device can search the database for the first road condition of the target location which is 60 m away from the camera unit, determine that the first road condition includes a rock on the user's road, and, in acquiring the second road condition of the target location by the camera unit, determine that the rock no longer exists in the second road condition. The second road condition of the target location is then stored in the database to replace the first road condition.
In one embodiment, the method further includes: the navigation device receives a second target location input by the user, acquires a current location by the positioning unit, calculates a path between the second target location and the current location according to an electronic map, acquires the road condition from the database, determines whether the path is suitable for the user according to the road condition, and warns the user when the path is not suitable for the user.
In one embodiment, the navigation device calculates the path between the second target location and the current location by a navigation path optimization algorithm. In one embodiment, the navigation path optimization algorithm includes, but is not limited to, a Dijkstra algorithm, an A-star algorithm, or a highway hierarchies algorithm. In one embodiment, the path is not suitable for the user when frequent puddles and uneven surfaces exist along the path between the second target location and the current location.
In one embodiment, the modules/units integrated in the navigation device can be stored in a computer-readable storage medium if such modules/units are implemented in the form of a software product. Thus, the present disclosure may implement all or part of the method of the foregoing embodiments by a computer program, which may be stored in the computer-readable storage medium. The steps of the various method embodiments described above may be implemented by the computer program when it is executed by a processor. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunication signals, and software distribution media.
The exemplary embodiments shown and described above are only examples. Even though numerous characteristics and advantages of the present disclosure have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in the detail, including in matters of shape, size and arrangement of the parts within the principles of the present disclosure, up to and including the full extent established by the broad general meaning of the terms used in the claims.