The disclosure relates generally to systems, methods, and devices that incorporate an optical sensor to both detect and identify objects or people in an environment around a user. Typically, the users are vision impaired and rely at least partially on sensory input other than sight to navigate their environment. The systems, methods, and devices disclosed herein are directed toward providing sensory input that helps vision impaired users navigate their environment.
Visual impairments have long been a problem for humans, especially in severe cases where the impairment is total and a person is completely blind. Even when a person does not suffer from total blindness, blurred vision can cause significant stress or injury to a person who cannot adequately assess the environment around them. Historically, visual impairment was so vexing that restoring a person's sight was considered a miracle; many historical texts describe the restoration of sight to the blind as a manifestation of divine providence. Mankind has also relied on technological attempts to improve the sight of people experiencing visual affliction.
In ancient history, those who suffered visual impairment were typically led around by placing a hand on the shoulder or arm of a friend or relative, or used a stick held in front of them to detect objects in their path. It was not until the Greco-Roman era that the first visual aid devices were developed, and those were merely corrective devices for those whose affliction was biologically suitable to be corrected. For example, the human eye includes a lens which allows light to pass to light receptors in the back of the eye. This light is transmitted by nerves, essentially, to the brain for interpretation and provides sight. Very simply put, afflictions that affect the ability of the eye to focus light on the light receptors in the eye have traditionally been correctable, or at least improvable, through technological supplement. Afflictions that cause disruption in the nerves between the light receptors in the eye and the brain have not been historically correctable. There are a host of afflictions that cause blindness, and this is not intended to be a complete list of those afflictions. Rather, this is a simplified explanation of at least one distinction between correctable vision problems and currently uncorrectable vision problems.
The earliest vision correction or improvement articles came about with the development of the convex lens. Convex lenses could be used to focus light for the eye, magnify visual images, and offset imperfections in the shape of a person's eye that caused visual impairment. Convex lenses came about, in at least a practical sense, approximately a thousand years ago and were implemented in glasses, telescopes, magnifying lenses, monocles, and other similar developments to enhance visual acuity, both for those afflicted with light-focusing problems in the eye and for those with naturally normal eyesight who wished to see more than what is visible through natural eyesight. Glasses became the most popular form of visual acuity improvement technology and have been improved in more recent times by the invention of bifocals, trifocals, polarized lenses, lenses that address astigmatism, and solutions for a host of other ailments.
However, those that suffer from afflictions involving disruption in the nerves between the eye and the brain, or other similar problems, cannot be helped by glasses or other currently available technological developments for correcting the lens shape and light-focusing ability of the eye, simply because the eye is not the source of the blindness in these individuals. Thus, many of the people with these afflictions are limited to being escorted by a friend or relative to interpret the world around them, or to using a cane, traditionally a white cane, moved left and right, and sometimes up and down, to detect objects in their path before walking into them. Many of these individuals can move around their homes based on knowing relative distances between objects, such as a table and a couch. However, in more unfamiliar areas, it may be difficult for these people to move confidently, without fear that they will trip or hurt themselves by walking into objects they are not aware of.
It is therefore one object of this disclosure to provide a system incorporating an optical sensor that gives audible feedback to a user for the detection and identification of obstacles in the user's path. It is another object of this disclosure to provide a device which includes tactile feedback that provides informational messages to a user via a braille printer. It is another object of this disclosure to provide a device which includes tactile feedback in the form of vibration once an object is detected. It is a further object of this disclosure for the system and the devices to operate in tandem, providing a user with tactile feedback and audible feedback for detecting and identifying obstacles or people in the user's path. It is a further object of this disclosure to provide a method for detecting and identifying obstacles or people in a user's path using the system and devices disclosed herein.
Disclosed herein is a system, device, and method for detecting and identifying objects in the path of a visually impaired person using an optical sensor.
Non-limiting and non-exhaustive implementations of the present disclosure are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified. Advantages of the present disclosure will become better understood with regard to the following description and accompanying drawings:
In the following description, for purposes of explanation and not limitation, specific techniques and embodiments are set forth, such as particular techniques and configurations, in order to provide a thorough understanding of the device disclosed herein. While the techniques and embodiments will primarily be described in context with the accompanying drawings, those skilled in the art will further appreciate that the techniques and embodiments may also be practiced in other similar devices.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or like parts. It is further noted that elements disclosed with respect to particular embodiments are not restricted to only those embodiments in which they are described. For example, an element described in reference to one embodiment or figure, may be alternatively included in another embodiment or figure regardless of whether or not those elements are shown or described in another embodiment or figure. In other words, elements in the figures may be interchangeable between various embodiments disclosed herein, whether shown or not.
Server computer 105 may connect to an application 140 executed by personal device 135. Personal device 135 may be implemented as a smart phone, a tablet, a laptop computer, a desktop computer, a music storage and playback device, a personal digital assistant, or any other device incorporating a processor which is capable of implementing a software application that may interact with, control, and provide information to server computer 105 or haptic controller 165, as will be discussed below. Personal device 135 may incorporate one or more of a camera 145, an optical sensor 150 (which may be incorporated as LIDAR technology in one example), a gyro or accelerometer 155, and a global positioning sensor 160, each of which may be controlled by application 140. LIDAR refers to “light detection and ranging” or “laser imaging, detection, and ranging” technology.
Personal device 135 may include hardware that connects personal device 135 to the Internet for the purpose of sharing information between a server device, such as server computer 105, and a haptic controller 165 using any appropriate communication protocol. For example, connections between personal device 135 and server computer 105 or haptic controller 165 may be implemented using Wi-Fi, ZigBee, Z-Wave, RF4CE, Ethernet, telephone line, cellular channels, or others that operate in accordance with protocols defined in IEEE (Institute of Electrical and Electronics Engineers) 802.11, 802.11a, 802.11b, 802.11e, 802.11g, 802.11h, 802.11i, 802.11n, 802.16, 802.16d, 802.16e, or 802.16m using any network type including a wide-area network (“WAN”), a local-area network (“LAN”), a 2G network, a 3G network, a 4G network, a 5G network, a Worldwide Interoperability for Microwave Access (WiMAX) network, a Long Term Evolution (LTE) network, a Code-Division Multiple Access (CDMA) network, a Wideband CDMA (WCDMA) network, any type of satellite or cellular network, or any other appropriate protocol to facilitate communication. Haptic controller 165 may be implemented within a cane 180 and may operate a micro-braille printer 170 and a vibration unit 175.
System 100, which will be discussed in significant detail below, may, for brief purposes of introduction, use an optical sensor, such as camera 145 and/or optical sensor 150 that uses LIDAR technology, to view an environment around a user of system 100. This information may be transmitted to server computer 105 and interpreted by artificial intelligence core 115 to detect and identify objects in the user's environment. Images detected through camera 145 or optical sensor 150, individually or collectively, may be provided to server computer 105 for processing by artificial intelligence core 115 and processor 110. Artificial intelligence core 115 may be trained for detecting objects in images, or in images assembled together as a video stream provided by camera 145 or optical sensor 150. Object detection may occur as a result of comparing objects in the images or video stream to training data stored in AI data 130. Artificial intelligence core 115 may further identify faces of known individuals. The detected location of the objects may be communicated to the user by an audible warning from personal device 135. For example, if a car is exiting a grocery store parking lot, optical sensor 150 may detect that the car is within a predetermined or undetermined distance from a user walking on a sidewalk that crosses the exit of the grocery store parking lot. Application 140, executed by a processor in personal device 135, may cause an audible warning to be emitted of the nature of “A vehicle has entered your path 25 feet ahead of you.” Personal device 135 may further transmit warning information to haptic controller 165 and cause vibration unit 175 in cane 180 to vibrate as a warning and/or may cause micro-braille printer 170 to print a warning, in braille, that informs the user that a vehicle has entered the user's path.
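The detection-to-warning flow described above can be sketched in simplified form. This is an illustrative sketch only; the distance threshold, class names, and function names are assumptions for explanation and do not appear in the disclosure.

```python
from dataclasses import dataclass

# Hypothetical distance threshold for issuing a warning (assumption).
WARNING_DISTANCE_FT = 25.0

@dataclass
class DetectedObject:
    label: str          # e.g. "vehicle", as classified by the AI core
    distance_ft: float  # range estimated from the LIDAR / optical sensor

def build_warnings(obj: DetectedObject, threshold: float = WARNING_DISTANCE_FT):
    """Return (audible_message, vibrate, braille_message) for one detection.

    Objects beyond the threshold produce no warning; objects at or inside
    it produce an audible message, a vibration flag for the cane's
    vibration unit, and a short message for the micro-braille printer.
    """
    if obj.distance_ft > threshold:
        return None, False, None
    audible = (f"A {obj.label} has entered your path "
               f"{obj.distance_ft:.0f} feet ahead of you.")
    braille = f"{obj.label} ahead {obj.distance_ft:.0f} ft"
    return audible, True, braille

# A vehicle 25 feet away triggers all three warning channels.
audible, vibrate, braille = build_warnings(DetectedObject("vehicle", 25.0))
```

In a real system the audible message would be passed to a text-to-speech engine and the vibration and braille outputs routed to haptic controller 165; this sketch only shows how one detection fans out into the three warning channels.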
While this is a simple explanation of system 100 for introductory purposes, system 100 may incorporate a host of other abilities as will be described below using the various devices and methods herein.
As shown in
These parameters, for either LIDAR objects or image objects, may include hazard characteristics of an object, such as injury potential, fragility, typical weight, density, dimensions in terms of statistical means and variances, and even texture characteristics. The relative degree of hazard for these objects may be identified manually or by artificial intelligence core 115 as object hazard characteristics 525. Thus, the object library may be used to identify similar objects when encountered in the future and to identify the most urgent or most dangerous object in the user's path.
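One way to realize the ranking described above is to score each detected object by combining its stored hazard characteristics with its proximity and surface the highest-scoring object first. The library entries, weights, and scoring formula below are assumptions chosen for illustration, not values from the disclosure.

```python
# Hypothetical object library: label -> hazard characteristics in [0, 1].
OBJECT_LIBRARY = {
    "vehicle":    {"injury_potential": 0.9, "fragility": 0.1},
    "glass_vase": {"injury_potential": 0.3, "fragility": 0.9},
    "couch":      {"injury_potential": 0.2, "fragility": 0.1},
}

def hazard_score(label: str, distance_ft: float) -> float:
    """Combine hazard characteristics with proximity (closer = more urgent)."""
    traits = OBJECT_LIBRARY.get(label, {"injury_potential": 0.5, "fragility": 0.5})
    severity = 0.7 * traits["injury_potential"] + 0.3 * traits["fragility"]
    proximity = 1.0 / max(distance_ft, 1.0)  # clamp to avoid division by zero
    return severity * proximity

def most_urgent(detections):
    """detections: list of (label, distance_ft); return the most urgent label."""
    return max(detections, key=lambda d: hazard_score(*d))[0]

# A vehicle 10 feet away outranks a distant fragile vase and a nearby couch.
top = most_urgent([("vehicle", 10.0), ("glass_vase", 40.0), ("couch", 5.0)])
```

Weighting injury potential more heavily than fragility is one plausible design choice; a trained artificial intelligence core could instead learn these weights from the hazard characteristics stored in the object library.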
Processor 705 may use data created via custom neural network 720, object database 725, neural network 730, and terrain information 735, which is similar to terrain information 630, shown in
Walking speed is affected by terrain and, thus, terrain information 925 may be provided to processor 905 as well as gyroscope data 930 and accelerometer data 935 from gyro/accelerometer 155 to estimate a user’s gait, posture, and walking speed. Terrain information 925 may also identify personal space boundaries for assessing whether or not a detected object is within a user’s path (e.g., inside or outside the user’s personal space or extended personal space boundaries).
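A common way to estimate walking speed from accelerometer data, as the paragraph above describes, is to count step peaks in the acceleration magnitude and multiply the step rate by a stride length (which terrain information could adjust). The threshold, stride length, and synthetic signal below are assumptions for illustration.

```python
def count_steps(accel_magnitudes, threshold=11.0):
    """Count upward crossings of the threshold: one crossing per step peak."""
    steps = 0
    above = False
    for a in accel_magnitudes:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a <= threshold:
            above = False
    return steps

def walking_speed_mps(accel_magnitudes, sample_rate_hz, stride_m=0.7):
    """Estimate speed in m/s as (steps per second) * (stride length)."""
    duration_s = len(accel_magnitudes) / sample_rate_hz
    if duration_s == 0:
        return 0.0
    return (count_steps(accel_magnitudes) / duration_s) * stride_m

# Synthetic 10 Hz signal: a step peak every 0.5 s (2 steps per second)
# superimposed on gravity (~9.8 m/s^2).
signal = [9.8 + (5.0 if i % 5 == 0 else 0.0) for i in range(100)]
speed = walking_speed_mps(signal, sample_rate_hz=10)  # -> 1.4 m/s
```

Real gyro/accelerometer data would be noisier and would typically be low-pass filtered before peak counting; this sketch only conveys the estimation principle.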
Memory database 1130 may further provide information to hazard detector 1135, which may cause a hazard alert 1140 to be transmitted by personal electronic device 135. Hazards may include objects that cause obstructions, collision likelihoods, the presence of fragile objects, incoming traffic, and other similar hazards encountered by people inside and outside of their homes. Memory database 1130 may also provide data stored within memory database 1130 to comfort estimator 1145, which may also be informed by personal space data 1120, to produce comfort level data 1150. Comfort estimator 1145 may be implemented by a processor that determines comfort levels for sustaining a particular activity. For example, when walking, the comfort level of a user may depend on crowd density, unevenness of surfaces, and familiarity with a particular place. Comfort level data 1150 may be generated continuously or with a high periodicity to properly assess the number and type of objects in the path of a user.
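The comfort estimation described above can be sketched as a simple weighted score: comfort decreases with crowd density and surface unevenness and increases with familiarity. The inputs, weights, and clamping to a [0, 1] range are assumptions for illustration, not values from the disclosure.

```python
def comfort_level(crowd_density: float, unevenness: float,
                  familiarity: float) -> float:
    """Estimate a walking comfort score in [0, 1].

    All inputs are normalized to [0, 1]: crowd_density and unevenness
    reduce comfort, while unfamiliarity (1 - familiarity) adds a
    further penalty. Weights are illustrative assumptions.
    """
    score = (1.0
             - 0.5 * crowd_density
             - 0.3 * unevenness
             - 0.2 * (1.0 - familiarity))
    return max(0.0, min(1.0, score))  # clamp to the valid range

# An empty, smooth, familiar route is maximally comfortable;
# a crowded, uneven, unfamiliar one is maximally uncomfortable.
best = comfort_level(0.0, 0.0, 1.0)   # -> 1.0
worst = comfort_level(1.0, 1.0, 0.0)  # -> 0.0
```

Recomputing such a score continuously, or with a high periodicity, matches the paragraph's note that comfort level data 1150 tracks the changing number and type of objects in the user's path.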
Due to the incorporation of LIDAR technology, much more precise three-dimensional data may be measured over short to long ranges regardless of environment or lighting conditions. Thus, personal electronic device 135 may be used to read text, recognize faces, identify products, recognize barcodes, recognize money, identify objects, identify colors, identify environments, and assist with street navigation, public transportation, and access to parks and recreation.
The foregoing description has been presented for purposes of illustration. It is not exhaustive and does not limit the invention to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. For example, components described herein may be removed and other components added without departing from the scope or spirit of the embodiments disclosed herein or the appended claims.
Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
This application claims priority to and benefit under 35 U.S.C. 119 to U.S. Provisional Pat. Application No. 63/246,058 filed on Sep. 20, 2021 which is incorporated herein by reference in its entirety, including but not limited to those portions that specifically appear hereinafter, the incorporation by reference being made with the following exception: In the event that any portion of the above-referenced applications are inconsistent with this application, this application supersedes said portion of said above-referenced applications.
Number | Date | Country
---|---|---
63246058 | Sep 2021 | US