The present disclosure relates to a method and a system for autocorrecting and/or teaching braille.
Many systems have been developed to help sighted individuals with the task of writing and learning to write. The number of these systems has been increasing with the advance of technology. For example, autocorrect functions have been added to many applications such as word processing, email, calendar events and the like. These autocorrect functions may detect when a typed word is incorrectly spelled and may automatically correct the spelling within the application. Some autocorrect functions may also perform autofill functions such that the system predicts what a user is writing as the user types and automatically completes the word for the user.
Other systems include aids to help a student learn to write. These systems may observe the student typing and detect when a word is incorrectly spelled. In response to the detected incorrect spelling, the system may provide feedback including the correct way to spell the word.
These tools are useful but unfortunately cannot help blind individuals in the same manner that they can help sighted individuals. Blind individuals read and write using braille, which is designed so that the individual can read using the sense of touch instead of vision. Braille includes various combinations of protrusions within a 3 by 2 cell, with each combination of protrusions corresponding to a letter of the alphabet. Thus, in order to write braille, the user may write with the reading/writing substrate upside down (so the characters are reverse-oriented as written), “poking” the protrusions into the paper and then flipping the substrate over in order to feel them. Some blind individuals start at the right side of the page and write each word in a forward direction (first letter written first), while others write in a reverse direction (last letter written first). Because blind people comprise a minority of the population and because braille can be written in various manners, relatively little time and effort has been spent developing systems for autocorrecting and/or teaching braille.
Thus, there is a need for systems and methods for teaching and/or autocorrecting braille.
What is described is a braille teaching/autocorrecting system. The braille teaching/autocorrecting system includes a camera configured to detect image data corresponding to at least one braille character. The braille teaching/autocorrecting system also includes a processor coupled to the camera and configured to identify the at least one braille character based on the image data and determine feedback data based on the identification of the at least one braille character.
Also described is a braille teaching/autocorrecting system. The braille teaching/autocorrecting system includes a sensor configured to detect braille data corresponding to at least one reverse-oriented braille character. The braille teaching/autocorrecting system also includes a processor coupled to the sensor and configured to identify the at least one reverse-oriented braille character based on the braille data and determine feedback data based on the identification of the at least one reverse-oriented braille character.
Also described is a braille teaching/autocorrecting system. The braille teaching/autocorrecting system includes a sensor configured to detect braille data corresponding to at least one braille character. The braille teaching/autocorrecting system also includes a processor coupled to the sensor. The processor is configured to determine a direction in which the at least one braille character is being written based on the braille data. The processor is also configured to determine an orientation in which the at least one braille character is being written based on the braille data. The processor is also configured to identify the at least one braille character based on the braille data. The processor is further configured to determine feedback data based on the direction, orientation and identification of the at least one braille character.
Other systems, methods, features, and advantages of the present invention will be or will become apparent to one of ordinary skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present invention, and be protected by the accompanying claims. Component parts shown in the drawings are not necessarily to scale, and may be exaggerated to better illustrate the important features of the present invention. In the drawings, like reference numerals designate like parts throughout the different views, wherein:
The systems and methods described herein provide autocorrecting and teaching functionality to individuals writing using braille. The systems and methods described herein provide several benefits and advantages, such as being able to detect braille as a user is writing it and to determine which braille character the user has written. The systems and methods also assist an individual in learning braille by providing feedback as the user writes, and they autocomplete and autocorrect braille as the user writes it in order to save the user time. For example, a potential word determination may be made based on detected written characters, a number or percentage of times each word has been written by the user, which word or words fit contextually, whether a character of a misspelled word has locations selected that are similar to another character that is a potential word, whether the user uses a different pattern of writing to write different words or the like. Additionally, the processor is capable of determining feedback, autocompleting and autocorrecting braille when written in a forward or reverse orientation and a forward or reverse direction, which provides the advantage of being usable by any braille writer regardless of writing style.
An exemplary system includes a sensor that is capable of detecting braille as it is being written, such as a camera for detecting the location of a user's hand, stylus, etc. or a touchscreen for detecting contact corresponding to locations within a braille cell. The system also includes a memory for storing braille characters and words in reverse or forward-orientations. The system also includes a processor for determining which characters are being written by the user, determining a direction and an orientation of the braille writing and determining feedback to provide to the user based on the determined characters and the direction and orientation of the braille. The system also includes an output device for providing the feedback, such as speakers or a vibration unit.
Most vision impaired individuals are taught to both read and write braille. When writing using braille, multiple braille characters may be positioned adjacent to each other such that the multiple braille characters correspond to a word. A space may be left between words to indicate the separation of words. It is preferable for each braille cell 100 to have the same size so that a reader can quickly scan his finger or fingers over the text without having to adjust for different sized cells.
Special tools have been developed to simplify the process and to improve the quality of written braille.
Before writing braille using the slate 200 and the stylus 202, a piece of paper or other substrate is placed between the first plate 210 and the second plate 212. The plates are then pushed together and may be coupled such that the paper or other substrate cannot easily be removed from the slate 200. The second plate 212 may define rectangular or other shaped holes 204 that correspond with a cell and define the cell size. The first plate 210 includes a plurality of indentions 206 such that six indentions 206 corresponding to the locations of a braille cell align with each rectangular hole 204.
In order to write the braille once the substrate is coupled to the slate 200, an individual may grasp the stylus 202 at the handle 205 and form indentions in the substrate by pressing the tip 203 through each rectangular hole 204 into each indention 206. By repeating this process, a user may write multiple braille characters and multiple words using the slate 200 and the stylus 202.
When writing braille using the slate and stylus method, each braille character should be written in a reverse-orientation (i.e., reversing each location of the braille cell) as indicated in
Additionally, the braille should be written such that the first letter of the word begins on the far right of the line and the last letter is positioned on the far left of the line. As the substrate will be turned over before being read, this ensures that the braille can be read from left to right.
As mentioned above, braille may be written such that each line starts on the right and ends on the left.
The processor 402 may include a computer processor such as an ARM processor, DSP processor, distributed processor or other form of central processing. The processor 402 may be local (i.e., positioned in/on the teaching/autocorrecting device 400), may be remote (i.e., positioned remote from the teaching/autocorrecting device 400), or it may be a pairing of a local and a remote processor. The processor 402 may be capable of determining braille characters that have been and/or are being written by a user based on data detected by the sensor 406. The processor 402 may also be capable of determining feedback to provide to the user based on the determined braille characters.
The memory 404 may include one or any combination of a RAM or other volatile or nonvolatile memory, a non-transitory memory or a data storage device, such as a hard disk drive, a solid state disk drive, a hybrid disk drive or other appropriate data storage. The memory 404 may store machine-readable instructions which may be executed by the processor 402. As with the processor 402, the memory 404 may be local, remote or a pairing of local and remote.
The sensor 406 may include any sensor capable of detecting braille writing over a period of time. For example, the sensor 406 may include a camera capable of detecting the position of a tip of a stylus, the position of a user's hand, indentions in a substrate or the like in order to determine each location selected by a user.
Detection of indentions in a substrate requires different techniques than detection of typical characters on a substrate using an optical character recognition (OCR) system. Typical OCR systems are designed to ignore small marks on a page. This approach would not work for braille, as each indention may be detected as a small mark and ignored. Accordingly, in order to detect indentions, a preferable method may include magnifying the small marks and ignoring the larger marks.
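By way of a non-limiting illustration, the sketch below shows one way such small-mark detection might be approached, assuming a grayscale image in which indentions appear as small dark blobs; the function name detect_indentions and all thresholds are illustrative assumptions rather than part of any disclosed embodiment.

```python
# A minimal sketch of indention detection that keeps small marks and
# discards large ones (the opposite of typical OCR preprocessing).
# Assumes a grayscale image in which indentions appear as small dark blobs;
# all thresholds are illustrative.
import numpy as np
from scipy import ndimage

def detect_indentions(gray, dark_threshold=80, min_area=4, max_area=200):
    """Return (row, col) centroids of blobs small enough to be braille dots."""
    mask = gray < dark_threshold                      # keep only dark pixels
    labels, count = ndimage.label(mask)               # connected components
    centroids = []
    for idx in range(1, count + 1):
        area = int((labels == idx).sum())
        if min_area <= area <= max_area:              # small marks are kept;
            centroids.append(ndimage.center_of_mass(mask, labels, idx))
        # larger components (printed text, shadows, page edges) are ignored
    return centroids

# Example: a synthetic 40x40 image with one small dark dot
image = np.full((40, 40), 255, dtype=np.uint8)
image[10:13, 20:23] = 0
print(detect_indentions(image))                       # one centroid near (11, 21)
```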
In some embodiments, the sensor 406 may include a touchscreen having different areas corresponding to locations of a braille cell, an electronic circuit having areas corresponding to each location of a braille cell such that a touch in each location closes a circuit and indicates a selection of the location, or the like.
The input device 408 may include any input device such as a mouse, a track pad, a microphone, one or more buttons, a pedal, a touchscreen and/or the like. The input device 408 may be adapted to receive input from a user corresponding to a function of the teaching/autocorrecting device 400. For example, a user may toggle between an autocorrect mode and a teaching mode using the input device 408.
The output device 410 may include a speaker, a vibration unit, a display, a touchscreen and/or the like. The output device 410 may output data indicating one or more options for autocorrecting as a user begins to write braille, data indicating whether a user is writing braille correctly, data indicating potential improvements to a user's writing or the like.
Using the teaching/autocorrecting device 400, a user may write braille and the sensor 406 may detect the braille as the user is writing it. The sensor 406 may detect each selected location of each braille cell as it is selected by the user, or it may detect each portion of a braille character, each braille character, each word or group of words, or any combination of the above after they are completed. As the user is writing, the processor 402 may predict one or more potential autocorrect words based on what the user has already written. Using the output device 410, the teaching/autocorrecting device 400 may output the one or more potential words after the user has written at least one character. Using the input device 408, the user may inform the teaching/autocorrecting device 400 whether a potential word is the correct word. Based on the user's response, the processor 402 may complete the word for the user.
The I/O port 412 may include one or more ports adapted to allow communications between the teaching/autocorrecting device 400 and another device. For example, the I/O port 412 may include a headphone jack, a data port, a wireless antenna, a 3G or LTE chip or the like. In some embodiments, one or more of the components of the teaching/autocorrecting device 400 may be positioned remote from the teaching/autocorrecting device 400. These components may communicate with one another and with the onboard components via the I/O port 412. For example, the teaching/autocorrecting device 400 may have an onboard processor and a memory. A camera, a wireless mouse and a speaker may be remote from the teaching/autocorrecting device 400 and coupled to each other and the processor and the memory via the I/O port 412.
In some embodiments, the I/O port 412 may allow communications between a computing device and the teaching/autocorrecting device 400 such that the teaching/autocorrecting device 400 may determine characters, words, sentences or the like and transmit them via the I/O port 412 to the computing device. The computing device may then cause the characters, words, sentences, etc. to be inserted into an application such as email, web browser, word processing or the like.
The braille recognition module 450 is adapted to recognize braille characters as they are being written. The sensor 406 may detect braille data, such as a set of indentions corresponding to braille cell locations, the location of a user's hand and/or the location of a tip of a stylus, and transmit the detected data to the processor 402. In some embodiments, the braille recognition module 450 may determine selected characters and/or locations when the user simply taps a stylus on the desired location. The braille recognition module 450 may receive this detected data from the sensor 406 and determine which characters are being written based on the received data. The braille recognition module 450 may be adapted to identify braille characters written in a reverse-orientation, as they would be written by a user, or characters written in a forward-orientation, as they would be read by a user.
The direction determination module 452 may receive the data detected by the sensor 406 and/or the data generated by the braille recognition module 450 and determine whether the characters are being written as forward-oriented or reverse-oriented. The direction determination module 452 may also determine in which direction the braille is being written based on the received data (i.e., if reverse-oriented, whether the braille is being written in a forward direction from right to left or a reverse direction from left to right).
The potential word determination module 454 may be adapted to determine potential words that a user is writing and/or has written. The potential word determination module 454 may receive data including recognized characters from the braille recognition module 450 and/or a direction of the braille characters and/or the braille lines from the direction determination module 452. Based on the received data, the potential word determination module 454 may determine potential words based on partially written words and/or misspelled words.
In the example of
If the same or a similar system detected the characters in
The potential word determination module 454 may be adapted to autocomplete a partially written word and/or correct misspelled words. The potential word determination module 454 may compare detected characters to a database of words to determine which word or words are potential words based on the determined characters. A module of the processor 402 may convert braille characters into other characters (i.e., the English alphabet) before processing (i.e., comparing the detected characters to words in a database) and/or the processor 402 may process the data without converting the braille characters. The potential word determination may be made based on detected written characters, a number or percentage of times each word has been written by the user, which word or words fit contextually, whether a character of a misspelled word has locations selected that are similar to another character that is a potential word, whether the user uses a different pattern of writing to write different words or the like.
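By way of a non-limiting illustration, the sketch below converts detected dot sets to letters and matches a partially written word against a small word database; the dot-number convention (1-3 down the left column, 4-6 down the right), the abbreviated letter table and the word list are illustrative assumptions.

```python
# A non-limiting sketch of converting detected dot sets to letters and
# matching a partially written word against a word database.
BRAILLE_TO_LETTER = {
    frozenset({1}): "a", frozenset({1, 2}): "b", frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d", frozenset({1, 5}): "e", frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 5}): "h", frozenset({1, 2, 3, 5}): "r",
    frozenset({2, 3, 4, 5}): "t", frozenset({2, 4, 5, 6}): "w",
}
WORD_DATABASE = ["car", "care", "cart", "cat", "the", "there"]

def potential_words(cells, words=WORD_DATABASE):
    """Convert detected cells to letters and return words that the partially
    written prefix could complete to."""
    prefix = "".join(BRAILLE_TO_LETTER.get(frozenset(c), "?") for c in cells)
    return [w for w in words if w.startswith(prefix)]

# Example: the user has written "c" (dots 1,4) and "a" (dot 1)
print(potential_words([{1, 4}, {1}]))    # ['car', 'care', 'cart', 'cat']
```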
In some embodiments, the potential word determination module 454 may predict a likelihood of each potential word being the correct word and rank the potential words based on the predicted likelihood. In some embodiments, the processor 402 may only output the highest ranked potential word. The user may then provide feedback via the input device 408 and/or the sensor 406 indicating whether the potential word is the correct word. In some embodiments, the potential word determination module 454 may output a plurality of potential words (such as the 3 highest ranked potential words, the 5 highest ranked potential words or the like). The user may select one of the potential words using the input device 408 and/or the sensor 406. For example, the user may tap the stylus, hold his hand in a certain position, click a mouse or the like in response to hearing the correct word to indicate that the most recent potential word is the correct word.
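A minimal sketch of such ranking, assuming only a per-user usage count is available as the likelihood signal (the other factors described above could be weighted into the score similarly), might look like the following; the weighting and the default of three words are illustrative.

```python
# A minimal sketch of ranking candidate words by a per-user usage count.
def rank_potential_words(candidates, usage_counts, top_n=3):
    """Order candidates by how often the user has written them and return
    the top_n highest ranked words for output."""
    ranked = sorted(candidates, key=lambda w: usage_counts.get(w, 0), reverse=True)
    return ranked[:top_n]

print(rank_potential_words(["car", "care", "cart", "cat"], {"car": 12, "cat": 5}))
# -> ['car', 'cat', 'care']
```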
The feedback determination module 456 may be adapted to determine feedback to provide to the user based on data from the other modules. For example, if the user has selected a learning mode, the feedback determination module 456 may provide feedback every time a user writes a word correctly and/or incorrectly. The feedback may include a correct spelling of the word, correct dot placement within the cell for each character, an indication that the word is spelled wrong and the user should try again, that the sentence structure is incorrect, that incorrect punctuation is used or the like.
The feedback determination module 456 may also determine which, if any, potential words to provide to the user when the teaching/autocorrecting device 400 is in the autocorrect mode. For example, the feedback determination module 456 may include a user selectable setting (or a programmed setting) regarding a number of potential words (N) for the user to receive. The feedback determination module 456 may determine to output the N highest ranked potential words and provide data to the output device 410 causing the output device 410 to output the N highest ranked potential words.
The feedback determination module 456 may also determine a format in which the feedback will be provided to the user. The teaching/autocorrecting device 400 may include more than one output device 410, such as a speaker and/or a vibration unit. The feedback determination module 456 may determine whether to provide audio feedback and/or haptic feedback based on a number of factors. These factors may include which type of sensor 406 is being used, an ambient sound detected around the teaching/autocorrecting device 400, whether the user is in contact with the vibration unit, if the user has selected a preference or the like.
The braille cell size determination module 458 may be adapted to determine a size of the braille cells that the user is writing in based on data detected by the sensor. This determination may be made in different ways. For example, the teaching/autocorrecting device 400 may include two elongated members that the user may place together in an “L” shape that indicate the size of the braille cell. In some embodiments, the user may place two fingers together in an “L” shape to indicate the size of the braille cell. The sensor 406 may detect the “L” shape and the braille cell size determination module 458 may determine that the braille cell size is defined by a rectangle defined by an area within the “L” shape.
For example, the user may draw a rectangle using the stylus, a pen, a pencil or the like on a piece of paper to indicate the size of the braille cell. In embodiments where the sensor 406 is a touchscreen or an electronic circuit, the user may draw a rectangle and/or an “L” shape using a finger, a stylus, a pencil, a pen or the like to indicate the size of the braille cell. In some embodiments, the user may begin to write and the processor 402 may determine the cell size based on the locations that the user selects.
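By way of a non-limiting illustration, and assuming the sensor reports two opposite corner points of the drawn rectangle or “L” shape in pixel coordinates, the cell area might be derived as in the following sketch; the function name and the point format are assumptions.

```python
# A non-limiting sketch of deriving the braille cell rectangle from two
# detected opposite corner points (for example, the ends of the drawn "L").
def cell_from_corners(corner_a, corner_b):
    """Return (x, y, width, height) of the cell rectangle bounded by the
    two detected corners."""
    x0, y0 = corner_a
    x1, y1 = corner_b
    return (min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))

# Example: an "L" whose arms meet at (100, 100) and extend to (130, 145)
print(cell_from_corners((100, 100), (130, 145)))    # (100, 100, 30, 45)
```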
The smart necklace 500 may include a processor 502, a memory 504, a camera 506, a button 508, a first output unit 510A, a second output unit 510B and a battery (not shown). The smart necklace 500 may include any combination of the above components and/or may include additional components not illustrated. The processor 502 and the memory 504 may be similar to the processor 402 and the memory 404 of the teaching/autocorrecting device 400 and may be positioned on the smart necklace 500. The camera 506 may be any camera capable of detecting image data, such as the location of a user's hand, the location of a tip 556 of a stylus 554, an indentation on a substrate 550 or the like.
As an exemplary use of the smart necklace 500, a blind user wearing the smart necklace 500 may define a cell area on a substrate 550 by outlining a rectangle defining a cell 551 with the tip of the stylus 554. The camera 506 may detect this outline and the processor 502 may determine that the outline corresponds to a cell size. The processor may then divide the cell 551 into the 6 locations 552A, 552B, 552C, 552D, 552E and 552F of the cell 551.
The user may then begin to write in braille on the substrate 550 by touching the tip 556 of the stylus 554 to the substrate 550, by pushing the tip 556 of the stylus 554 into the substrate 550 causing indentions or the like. The substrate 550 may be a piece of paper, a sheet of metal or plastic or any other substrate. As the tip 556 of the stylus 554 approaches each location within the cell of the substrate 550, the processor 502 may determine that the location has been selected based on data detected by the camera 506. For example, the processor 502 may determine that a location has been selected when the image data received from the camera 506 indicates that the tip 556 is within a predetermined distance of the substrate 550. The tip 556 may be colored with a predetermined color so that the processor 502 can easily determine the location of the tip 556 relative to the substrate 550. In some embodiments, the processor 502 may determine that a location has been selected based on a detected position or location of the user's hand and/or detect indentions in the substrate 550 formed by the stylus 554.
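A non-limiting sketch of how a detected tip position might be mapped to one of the six cell locations follows, reusing the cell rectangle format from the sketch above and treating the tip as selecting a location only when it is within a predetermined height of the substrate; the coordinate convention and threshold are illustrative assumptions.

```python
# A non-limiting sketch of mapping a detected stylus-tip position to one of
# the six cell locations; assumes (x, y, width, height) cell rectangles.
def selected_location(tip_xy, tip_height, cell, height_threshold=2.0):
    """Return a dot number 1-6 for the location nearest the tip, or None if
    the tip is too far from the substrate to count as a selection."""
    if tip_height > height_threshold:
        return None                                   # tip not close enough
    x, y, w, h = cell
    col = 0 if tip_xy[0] < x + w / 2 else 1           # left or right column
    row = min(2, int((tip_xy[1] - y) / (h / 3)))      # one of three rows
    return row + 1 + 3 * col                          # dots 1-3 left, 4-6 right

print(selected_location((105, 110), 1.0, (100, 100, 30, 45)))    # 1 (top-left)
```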
The user and/or the processor 502 may define the cell size so that only one cell can fit on a surface 553 of the substrate 550 or so that many cells can fit on the surface 553 of the substrate 550. In
The processor 502 may store the character in the memory 504 in response to identifying the character. The processor 502 may be adapted to determine when the present character is complete based on detected data from the camera 506 such as a particular action being performed with the stylus 554, a tap of the user's finger, a selection of locations that corresponds with a character but cannot correspond to another character with additional selected locations or the like. The processor 502 may also be adapted to determine when a word is complete based on a detected space between cells, a particular action being performed with the stylus 554, a tap of the user's finger or the like. These features are particularly advantageous when the cell size is such that only one cell 551 can fit on the surface 553 so that the user can quickly proceed to the next cell and/or the next word.
The button 508 is an input device configured to receive an input from the user. The user may select an autocorrect mode or a teaching mode, may turn the smart necklace 500 on or off or further manipulate the functionality of the smart necklace 500 using the button 508. In some embodiments, the smart necklace 500 may include more than one button and/or a different input device, such as a toggle switch, a haptic strip, a touchscreen or the like.
The output units 510A and 510B may each include a speaker and/or a vibration unit. In some embodiments, the output units 510A and 510B may include both a speaker and a vibration unit. In some embodiments, the smart necklace 500 includes only one output unit which may provide audio and/or haptic output.
The touchscreen 606 may be adapted to receive user input via contact from a portion of a user's body, a stylus or another device. The touchscreen 606 may be divided into locations 612A, 612B, 612C, 612D, 612E and 612F that correspond to the locations of a braille cell. Contact may be made with each location to indicate a selection of the location. After selecting the desired locations, the user may indicate that he or she is finished with the character by performing an action, such as double tapping the touchscreen 606, swiping his finger in a particular manner or the like.
The microphone 607 may be an input device. A user may select between modes of the smart mobile device 600 by verbally speaking a command. A user may also indicate completion of a word, a desire to start a new sentence or another word processing request by verbally indicating the request. The microphone 607 may detect the speech data and transmit it to the processor 602, which may in turn determine the request and perform it.
The processor 602 may be adapted to determine a direction in which the user is writing and/or an orientation in which the user is writing. For example, a first user may write an F by selecting location 612A, location 612B and location 612D. In response, the processor 602 may determine that the user is writing in a forward-orientation. Another user may be used to writing braille as he would with a slate and stylus and instead write the letter F by selecting location 612A, 612D and 612E. The processor 602 may determine that this individual is writing in a reverse-orientation. After one or more characters have been entered by the user, the processor 602 may also determine in which direction the user is writing. The processor 602 may autocorrect and/or teach braille based on the detected input as well as the determined direction and orientation of the writing. The processor 602 may also determine feedback to be provided to a user via the speaker 610.
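By way of a non-limiting illustration, one way to make this orientation determination is to compare the selected dot set against a forward-orientation table and against its left/right mirror image, as in the sketch below; the abbreviated letter table is illustrative, and, as noted elsewhere, some patterns remain ambiguous until more characters are written.

```python
# A non-limiting sketch of forward/reverse orientation determination.
FORWARD = {"f": frozenset({1, 2, 4}), "c": frozenset({1, 4}),
           "r": frozenset({1, 2, 3, 5})}
MIRROR = {1: 4, 2: 5, 3: 6, 4: 1, 5: 2, 6: 3}         # swap left/right columns

def identify(dots):
    """Return (letter, orientation) pairs consistent with the selected dots."""
    dots = frozenset(dots)
    mirrored = frozenset(MIRROR[d] for d in dots)
    matches = []
    for letter, pattern in FORWARD.items():
        if dots == pattern:
            matches.append((letter, "forward"))
        if mirrored == pattern:
            matches.append((letter, "reverse"))
    return matches

print(identify({1, 2, 4}))    # [('f', 'forward')]
print(identify({1, 4, 5}))    # [('f', 'reverse')]; with a full table this set
                              # is also a forward-oriented 'd', hence ambiguity
```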
The memory 604 may store each selected location, character, finished word or the like as it is completed. The selected locations/characters/words/etc. may be displayed on a display, such as the touchscreen 606 or another display, or they may be later accessed by the processor 602 for printing, emailing, additional word processing or the like.
The processor 702 may receive this detected image data and perform autocorrecting and/or teaching functions based on the detected image data. In the embodiment illustrated in
In the embodiment illustrated in
A user may input data to the laptop 700 using the track pad 708 and/or the keyboard 709. In some embodiments, the user may be able to select modes of the laptop 700 using the track pad 708 and/or the keyboard 709. For example, a user may select between an autocorrect mode and a teaching mode, between a basic teaching mode and an advanced teaching mode, etc.
The memory 704 may store selected locations, written characters and/or written words and the processor 702 may perform functions with the stored writing, such as send a message as an email or a text, cause the characters and/or words to be output by the display 711, etc. For example, as the user writes braille on the substrate 715, the laptop 700 may display the corresponding characters on the display 711 using a braille format and/or an alphabetical format.
A teacher or other observer may be able to observe the progress of a user via the display 711. For example, in a teaching mode, the display 711 may indicate how well the user is learning by displaying a percentage correct, a number of misspelled words and/or miswritten characters, etc.
The button 858 may be used as an input device for operation of the mobile device 850. The speaker 860 may be configured to provide audio data based on signals received from the processor 852. The display 870 may be adapted to output image data based on signals received from the processor 852. The antenna 851 may be coupled to the processor 852 and/or an I/O port and be capable of transmitting and receiving signals from another device having an antenna. The imaging unit 803 may include a camera 806, a connection means 801, a connector 805 coupling the camera 806 to the connection means 801 and an antenna 812.
The connection means 801 may be any connection means capable of connecting to a writing base such as a substrate, a clipboard, a notebook or the like. In some embodiments, the connection means 801 may be a clip, a bracket, a snap connector, a patch of hook and loop fasteners, commonly available under the tradename Velcro™, or the like.
The connector 805 may couple the connection means 801 to the camera 806. In some embodiments, the connector 805 may be partially flexible such that the location and direction of focus of the camera 806 may be physically changed by repositioning the connector 805. When the camera 806 is in a desired position, the connector 805 may resist movement until a sufficient force is exerted on the connector 805. In this way, a user may place the camera 806 in a position in which the camera can optimally detect image data associated with a user writing braille. The processor 852 may analyze detected image data and determine a preferred position of the camera 806. The processor 852 may then instruct the speaker 860 to output feedback instructing the user on the preferred position of the camera 806.
The antenna 812 may be capable of communicating with the antenna 851 as indicated by the connection 829. Image data detected by the camera 806 may be transmitted to the mobile device 850 via the antenna 812 and received at the mobile device 850 via the antenna 851.
In some embodiments, the camera 806 may have a relatively small field of view (FOV) 815. This may allow the camera 806 to more accurately detect image data within the field of view of the camera 806. In the embodiment illustrated in
As illustrated, the FOV 815 includes the entire top surface 831 of the substrate 814. However, the FOV 815 does not include much space beyond the top surface 831 of the substrate 814. This allows the camera 806 to detect more detailed image data proximate the top surface 831 of the substrate 814.
The camera 806 may be positioned a distance 821 above the substrate 814. The distance 821 may be selected such that an angle 823 formed between a line perpendicular to the top surface 831 and a center of focus 825 of the camera 806 is relatively small. This allows the camera 806 to detect the location of a finger, a stylus or the like relative to the top surface 831 of the substrate 814 without obstructions, such as the user's hand, finger or the like.
The system 800 may also include a first longitudinal member 816 and a second longitudinal member 818. The longitudinal members 816 and 818 may be adapted to be altered such that a longitudinal distance 820 and a longitudinal distance 822 of the longitudinal members 816 and 818 may be changed. A user may place the longitudinal members 816 and 818 on the substrate 814 adjacent and perpendicular to each other. The camera 806 may detect the longitudinal members 816 and 818 and the processor 852 may determine the size of each cell 824 based on the longitudinal distance 820 and the longitudinal distance 822.
The cell 824 illustrates an area having a height 827 and a width 828 that corresponds to the longitudinal distance 820 and the longitudinal distance 822 of the longitudinal members 816 and 818. By setting the dimensions of the cell 824 using the longitudinal member 816 and the longitudinal member 818, a user may select any desirable cell size for the braille cells.
If the teaching mode is selected, the method 900 proceeds to block 904. In block 904, the processor may determine a word that a user is attempting to write in braille. In some embodiments, the processor may include a list of words associated with a particular learning level, such as basic, intermediate or advanced, and output a word or sentence from the list to the user via an output device so the processor knows which word the user is attempting to write. In some embodiments, a user may input a word or a group of words that he or she would like to write so that the processor knows which word the user is attempting to write. In some embodiments, the processor may not determine a word that a user is attempting to write before the user begins to write. For example, the user may wish to practice writing and start writing in braille. As the user is writing, the processor may determine the word that the user is attempting to write as the user is writing the word or after the user has written the word.
In block 906, the sensor detects data corresponding to at least one braille character. The sensor may then transmit the detected data to the processor which may determine which braille character is written based on the detected sensor data. In some instances, the processor may not be capable of determining which braille character has been written until at least two characters have been written, as one braille character written in a reverse-oriented manner may be the same as another braille character written in a forward-oriented manner. In these instances, the method 900 may proceed or it may remain at this block until the character can be determined.
In block 908, the processor may determine a direction in which the word is being written and/or the orientation of the braille character. This may be determined based on a known word (if the word the user is attempting to write is known), on a comparison of the written character to forward-oriented and/or reverse-oriented braille characters, on the location of the initial braille character or the like.
In some embodiments, the user may enter braille characters one at a time on a single cell (such as using the smart mobile device 600). The processor may determine the direction that the word is being written based on the orientation of the character and how the word is being spelled. For example, if the user is attempting to spell “car,” the processor may determine that the word is being written in a forward direction if the user writes the braille character for “C” first and may determine that the word is being written in a reverse direction if the user first writes the character for “R.” In other embodiments, the device may include a forward/reverse setting such that the user can select whether the writing will be in the forward or reverse direction and/or the forward or reverse orientation.
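A non-limiting sketch of this determination for the known-word case follows, assuming characters are entered one cell at a time and have already been converted to letters; the function name is illustrative.

```python
# A non-limiting sketch of determining writing direction from a known word.
def writing_direction(target_word, first_letter):
    """Return 'forward' if the first entered letter matches the start of the
    target word, 'reverse' if it matches the end, otherwise None."""
    if first_letter == target_word[0]:
        return "forward"
    if first_letter == target_word[-1]:
        return "reverse"
    return None

print(writing_direction("car", "c"))    # 'forward'
print(writing_direction("car", "r"))    # 'reverse'
```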
In block 910, the processor determines if the at least one braille character corresponds to the correct spelling of the word that the user is attempting to write. As the processor detects each braille character, it may compare the detected characters to the correct spelling of the word. This can be done in any writing style—forward or reverse writing and/or forward or reverse-oriented characters.
In order to determine if the word is misspelled, the processor needs to know the direction in which the word is being written and the orientation of the braille characters. In some embodiments, more than one braille character must be recognized before this determination can be made. With brief reference to
The processor may also compare the selected locations of each written character to the correct locations corresponding to the correct writing of the character. The processor can determine if incorrect locations are selected and/or if correct locations are not selected for the character. This can be done in any writing style—forward or reverse writing and/or forward or reverse-oriented characters. If the user selects locations that do not match the correct locations, the processor may determine to provide feedback informing the user of the incorrect locations.
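By way of a non-limiting illustration, the comparison of selected locations against the correct locations for a character might be sketched as follows, assuming dot sets for the active orientation are available; the data format is an assumption.

```python
# A non-limiting sketch of location-level feedback for a single character.
def location_feedback(selected, correct):
    """Return the incorrectly selected dots and the correct dots that were
    missed so the feedback module can describe both to the user."""
    selected, correct = set(selected), set(correct)
    return {"extra": sorted(selected - correct),
            "missing": sorted(correct - selected)}

# Example: the user attempts "c" (dots 1 and 4) but selects dots 1 and 5
print(location_feedback({1, 5}, {1, 4}))    # {'extra': [5], 'missing': [4]}
```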
In block 912, the processor may determine feedback to be provided to the user based on whether the at least one braille character corresponds to the correct spelling of the word and/or the correct locations are selected for each character. In some embodiments, the output device may give a verbal indication of whether the spelling is correct or not, a verbal indication of whether one or more incorrect locations have been selected and/or feedback indicating correct locations that have not been selected. In some embodiments, the output device may provide haptic feedback indicating whether the at least one braille character corresponds to the correct spelling of the word. The processor may determine to provide feedback as each wrong letter is inserted or may determine to provide feedback after the entire word, phrase, sentence, etc. is written.
Returning to block 902, if it is determined that the autocorrect mode has been selected, the method 900 proceeds to block 914. In block 914, the sensor may detect data corresponding to at least one braille character. This data may be transmitted to the processor, which may then determine which character is being written.
In block 916, the processor may determine a direction that the selected word is being written based on the sensed data. This determination may be made based on the location of the at least one braille character and/or the orientation of the at least one braille character. Block 916 may function in a similar manner as the block 908.
In block 918, the processor may determine at least one potential word that the user may be trying to spell using the at least one braille character. This determination may be made based on a comparison of the detected at least one braille character to a database to determine a match. In some embodiments, the processor uses an algorithm to select the potential words based on a number of times each potential word has been used, whether a potential word fits contextually with previously-written words or sentences, whether a potential word fits grammatically (i.e., if the previous words refer to a plural subject, then the verb may be a plural verb) or any other suitable factors.
In some embodiments, the memory may include a database corresponding to both orientations of the braille characters. The memory may also include a database including forward and reverse spellings of words. Once the processor knows the orientation and direction in which the word is being written, it may compare the detected braille character to at least one of the databases to determine which character is being written and potentially which word is being written. In some embodiments, the memory may include only one database that corresponds to one direction of spelling and one orientation. In these embodiments, the processor may convert the at least one braille character to match the one direction of spelling and one orientation and then compare the converted at least one braille character to the database.
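By way of a non-limiting illustration of the single-database approach, the sketch below converts a detected word to a forward-spelled, forward-oriented form before lookup; the mirror map, the abbreviated letter table and the example input are illustrative assumptions.

```python
# A non-limiting sketch of normalizing a detected word before database lookup.
MIRROR = {1: 4, 2: 5, 3: 6, 4: 1, 5: 2, 6: 3}
TO_LETTER = {frozenset({1, 4}): "c", frozenset({1}): "a",
             frozenset({1, 2, 3, 5}): "r"}

def normalize(cells, orientation, direction):
    """Convert detected cells into a forward-spelled, forward-oriented string
    so that a single database can be consulted."""
    if orientation == "reverse":
        cells = [frozenset(MIRROR[d] for d in c) for c in cells]
    letters = [TO_LETTER.get(frozenset(c), "?") for c in cells]
    if direction == "reverse":
        letters.reverse()                 # last character was written first
    return "".join(letters)

# "car" written reverse-oriented and in the reverse direction (r, a, c)
print(normalize([{2, 4, 5, 6}, {4}, {1, 4}], "reverse", "reverse"))    # 'car'
```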
The processor may begin to select potential words after the first braille character has been recognized, after a predetermined number of characters have been recognized, when a potential word has a high likelihood of being correct or the like. The processor may determine any number of potential words based on the detected at least one braille character and it may rank the potential words based on the likelihood that they are the selected word.
In some embodiments, the processor may correct the spelling of words after they have already been written. For example, if the user spells “cars” as “caes,” the processor may compare the letters “caes” to a database to determine partial matches and the likelihood of each partial match being correct. The likelihood may be determined based on a number of matching characters, the location of each character within the word, the context of previous words written by the user, the similarity of dot patterns between the written characters and potential characters or the like.
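A minimal sketch of ranking replacement candidates for an already-written, misspelled word by position-wise character matches follows; the candidate list is illustrative, and the score ignores the other factors listed above (context, usage frequency, dot-pattern similarity).

```python
# A minimal sketch of ranking replacements for a misspelled word.
def best_matches(written, candidates, top_n=3):
    """Return up to top_n same-length candidates ordered by the number of
    characters matching the written word at the same position."""
    def score(word):
        if len(word) != len(written):
            return -1                       # only consider same-length words
        return sum(a == b for a, b in zip(written, word))
    same_length = [w for w in candidates if score(w) >= 0]
    return sorted(same_length, key=score, reverse=True)[:top_n]

print(best_matches("caes", ["cars", "cats", "case", "the"]))
# -> ['cars', 'cats', 'case']
```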
In block 920, after the processor has determined at least one potential word, the processor may determine to provide feedback to the user based on the at least one potential word. In some embodiments, the feedback is provided based on the likelihood that at least one potential word is the correct word. For example, the processor may determine not to provide feedback until at least one potential word has a likelihood at or above a predetermined percentage. The processor may determine whether to only provide one potential word or multiple potential words based on the number of potential words, the likelihood of each potential word being the correct word, a user preference, another algorithm or the like. The processor may also determine whether to provide verbal feedback, haptic feedback or another type of feedback (such as visual feedback for a teacher or parent).
In some embodiments, a braille device capable of forming words and/or characters in braille (i.e., raising certain locations in at least one braille cell) is coupled to the processor. In these embodiments, the processor may generate one or more potential letters and/or words to be output by the braille device so that the user may touch the braille device and read the potential word or words.
In some embodiments, the processor may determine, based on user input or an algorithm, that the word should be autocorrected without user feedback. For example, if the likelihood that a potential word is the correct word is above a predetermined threshold, then the processor may determine to insert the word. In these embodiments, the processor may determine whether or not to provide feedback indicating that the word has been changed.
In block 922, the processor may receive an indication of whether one of the at least one potential words is the correct word. In some embodiments, the processor may provide only one word as feedback and the user may indicate whether it is the correct word by using the input device, by performing a gesture to be detected by the sensor, by speaking a “yes” or “no,” or the like. In some embodiments, the processor may determine to provide more than one word as feedback and provide them one at a time such that the user can select the correct word as it is being output.
In block 924, if the user indicated that a potential word is the correct word, the processor may complete and/or correct the selected word. For example, the memory may store each character and word that the user is writing and then store the additional letters that spell the selected word and/or replace the misspelled word with the correctly spelled word. The processor may also place the correct spelling in an email, text, word processing document or the like.
If the input by the user indicated that no potential word is the correct word, the method 900 may return to block 914. In some embodiments, the processor may continue to output potential words until a potential word is selected by the user.
In cell 1024, the user may incorrectly write a reverse-oriented “E.” At this point, the processor may determine that the user has incorrectly spelled the word “CAR.” The processor may then indicate to the user that the user has incorrectly spelled the word and/or may indicate to the user that the user has written the braille character corresponding to a reverse-oriented “E” instead of “C.” In some embodiments, the processor may provide feedback to the user informing the user of the correct way to write the braille character corresponding to a reverse-oriented “C,” such as by generating feedback indicating the correct locations that correspond to the letter “C.”
Similarly, in blocks 1056 and 1058, the user may input the braille characters corresponding to the letters “E” and “H.” At this point, the processor may determine a list of potential words including the word “THE.” The device may then provide feedback indicating that a potential word is “THE.” In response, the user may provide an indication that “THE” is the correct word. The device may then input a reverse-oriented “T” into the cell 1060.
The user may then write the braille characters corresponding to the letters “C,” “A” and “W” in cells 1076, 1078 and 1080. In response, the processor may look up “C A W” in a database to determine if the letters correspond to a correct spelling of a word. The processor may determine that this string of characters does not correspond to a word in the database. In response, the processor may determine a potential word that the user could have meant instead of the letters “C A W.” The processor may determine that “CAR” is a potential word and generate feedback indicating that “CAR” is a potential word. The user may respond to the feedback with an indication that “CAR” is correct. In response, the processor may replace the braille character corresponding to the reverse-oriented “W” in cell 1080 with a braille character corresponding to a reverse-oriented “R.”
Exemplary embodiments of the methods/systems have been disclosed in an illustrative style. Accordingly, the terminology employed throughout should be read in a non-limiting manner. Although minor modifications to the teachings herein will occur to those well versed in the art, it shall be understood that what is intended to be circumscribed within the scope of the patent warranted hereon are all such embodiments that reasonably fall within the scope of the advancement to the art hereby contributed, and that that scope shall not be restricted, except in light of the appended claims and their equivalents.
20140184384 | Zhu et al. | Jul 2014 | A1 |
20140184775 | Drake | Jul 2014 | A1 |
20140204245 | Wexler | Jul 2014 | A1 |
20140222023 | Kim et al. | Aug 2014 | A1 |
20140233859 | Cho | Aug 2014 | A1 |
20140236932 | Ikonomov | Aug 2014 | A1 |
20140249847 | Soon-Shiong | Sep 2014 | A1 |
20140251396 | Subhashrao et al. | Sep 2014 | A1 |
20140253702 | Wexler | Sep 2014 | A1 |
20140278070 | McGavran | Sep 2014 | A1 |
20140281943 | Prilepov | Sep 2014 | A1 |
20140287382 | Villar Cloquell | Sep 2014 | A1 |
20140309806 | Ricci | Oct 2014 | A1 |
20140313040 | Wright, Sr. | Oct 2014 | A1 |
20140335893 | Ronen | Nov 2014 | A1 |
20140343846 | Goldman et al. | Nov 2014 | A1 |
20140345956 | Kojina | Nov 2014 | A1 |
20140347265 | Aimone | Nov 2014 | A1 |
20140368412 | Jacobsen | Dec 2014 | A1 |
20140369541 | Miskin | Dec 2014 | A1 |
20140379251 | Tolstedt | Dec 2014 | A1 |
20140379336 | Bhatnager | Dec 2014 | A1 |
20150002808 | Rizzo, III et al. | Jan 2015 | A1 |
20150016035 | Tussy | Jan 2015 | A1 |
20150058237 | Bailey | Feb 2015 | A1 |
20150063661 | Lee | Mar 2015 | A1 |
20150081884 | Maguire | Mar 2015 | A1 |
20150084884 | Cherradi El Fadili | Mar 2015 | A1 |
20150099946 | Sahin | Apr 2015 | A1 |
20150109107 | Gomez et al. | Apr 2015 | A1 |
20150120186 | Heikes | Apr 2015 | A1 |
20150125831 | Chandrashekhar Nair et al. | May 2015 | A1 |
20150135310 | Lee | May 2015 | A1 |
20150141085 | Nuovo et al. | May 2015 | A1 |
20150142891 | Haque | May 2015 | A1 |
20150154643 | Artman et al. | Jun 2015 | A1 |
20150196101 | Dayal et al. | Jul 2015 | A1 |
20150198454 | Moore et al. | Jul 2015 | A1 |
20150198455 | Chen | Jul 2015 | A1 |
20150199566 | Moore et al. | Jul 2015 | A1 |
20150201181 | Moore et al. | Jul 2015 | A1 |
20150211858 | Jerauld | Jul 2015 | A1 |
20150219757 | Boelter et al. | Aug 2015 | A1 |
20150223355 | Fleck | Aug 2015 | A1 |
20150256977 | Huang | Sep 2015 | A1 |
20150257555 | Wong | Sep 2015 | A1 |
20150260474 | Rublowsky | Sep 2015 | A1 |
20150262509 | Labbe | Sep 2015 | A1 |
20150279172 | Hyde | Oct 2015 | A1 |
20150324646 | Kimia | Nov 2015 | A1 |
20150330787 | Cioffi et al. | Nov 2015 | A1 |
20150336276 | Song | Nov 2015 | A1 |
20150338917 | Steiner et al. | Nov 2015 | A1 |
20150341591 | Kelder et al. | Nov 2015 | A1 |
20150346496 | Haddick et al. | Dec 2015 | A1 |
20150356345 | Velozo | Dec 2015 | A1 |
20150356837 | Pajestka | Dec 2015 | A1 |
20150364943 | Vick | Dec 2015 | A1 |
20150367176 | Bejestan | Dec 2015 | A1 |
20150375395 | Kwon | Dec 2015 | A1 |
20160007158 | Venkatraman | Jan 2016 | A1 |
20160028917 | Wexler | Jan 2016 | A1 |
20160042228 | Opalka | Feb 2016 | A1 |
20160078289 | Michel | Mar 2016 | A1 |
20160098138 | Park | Apr 2016 | A1 |
20160156850 | Werblin et al. | Jun 2016 | A1 |
20160198319 | Huang | Jul 2016 | A1 |
20160350514 | Rajendran | Dec 2016 | A1 |
Number | Date | Country |
---|---|---|
201260746 | Jun 2009 | CN |
101527093 | Sep 2009 | CN |
201440733 | Apr 2010 | CN |
101803988 | Aug 2010 | CN |
101647745 | Jan 2011 | CN |
102316193 | Jan 2012 | CN |
102631280 | Aug 2012 | CN |
202547659 | Nov 2012 | CN |
102929394 | Feb 2013 | CN |
202722736 | Feb 2013 | CN |
102323819 | Jun 2013 | CN |
103445920 | Dec 2013 | CN |
102011080056 | Jan 2013 | DE |
102012000587 | Jul 2013 | DE |
102012202614 | Aug 2013 | DE |
1174049 | Sep 2004 | EP |
1721237 | Nov 2006 | EP |
2368455 | Sep 2011 | EP |
2371339 | Oct 2011 | EP |
2127033 | Aug 2012 | EP |
2581856 | Apr 2013 | EP |
2751775 | Jul 2016 | EP |
2885251 | Nov 2006 | FR |
2401752 | Nov 2004 | GB |
1069539 | Mar 1998 | JP |
2001304908 | Oct 2001 | JP |
2010012529 | Jan 2010 | JP |
2010182193 | Aug 2010 | JP |
4727352 | Jul 2011 | JP |
2013169611 | Sep 2013 | JP |
100405636 | Nov 2003 | KR |
20080080688 | Sep 2008 | KR |
20120020212 | Mar 2012 | KR |
1250929 | Apr 2013 | KR |
WO 1995004440 | Feb 1995 | WO
WO 9949656 | Sep 1999 | WO |
WO 0010073 | Feb 2000 | WO |
WO 0038393 | Jun 2000 | WO |
WO 179956 | Oct 2001 | WO |
WO 2004076974 | Sep 2004 | WO |
WO 2006028354 | Mar 2006 | WO |
WO 2006045819 | May 2006 | WO |
WO 2007031782 | Mar 2007 | WO |
WO 2008008791 | Jan 2008 | WO |
WO 2008015375 | Feb 2008 | WO |
WO 2008035993 | Mar 2008 | WO |
WO 2008096134 | Aug 2008 | WO |
WO 2008127316 | Oct 2008 | WO
WO 2010062481 | Jun 2010 | WO |
WO 2010109313 | Sep 2010 | WO |
WO 2012040703 | Mar 2012 | WO |
WO 2012163675 | Dec 2012 | WO
WO 2013045557 | Apr 2013 | WO |
WO 2013054257 | Apr 2013 | WO |
WO 2013067539 | May 2013 | WO |
WO 2013147704 | Oct 2013 | WO |
WO 2014104531 | Jul 2014 | WO |
WO 2014138123 | Sep 2014 | WO |
WO 2014172378 | Oct 2014 | WO |
WO 2015065418 | May 2015 | WO |
WO 2015092533 | Jun 2015 | WO
WO 2015108882 | Jul 2015 | WO |
WO 2015127062 | Aug 2015 | WO
Entry |
---|
Diallo, “Apple iOS 8: Top New Features”, Sep. 18, 2014, Forbes, http://www.forbes.com/sites/amadoudiallo/2014/09/18/apple-ios-8-top-new-features/#780ea24d6c7e. |
Zhang, Shanjun; Yoshino, Kazuyoshi; A Braille Recognition System by the Mobile Phone with Embedded Camera; 2007; IEEE. |
Diallo, Amadou; Sep. 18, 2014; Apple iOS 8: Top New Features; Forbes Magazine. |
N. Kalra, T. Lauwers, D. Dewey, T. Stepleton, M.B. Dias; Iterative Design of a Braille Writing Tutor to Combat Illiteracy; Aug. 30, 2007; IEEE. |
The Nex Band; http://www.mightycast.com/#faq; May 19, 2015; 4 pages. |
Cardonha et al.; “A Crowdsourcing Platform for the Construction of Accessibility Maps”; W4A'13 Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility; Article No. 26; 2013; 5 pages. |
Bujacz et al.; “Remote Guidance for the Blind—A Proposed Teleassistance System and Navigation Trials”; Conference on Human System Interactions; May 25-27, 2008; 6 pages. |
Rodriguez et al.; “CrowdSight: Rapidly Prototyping Intelligent Visual Processing Apps”; AAAI Human Computation Workshop (HCOMP); 2011; 6 pages. |
Chaudary et al.; “Alternative Navigation Assistance Aids for Visually Impaired Blind Persons”; Proceedings of ICEAPVI; Feb. 12-14, 2015; 5 pages. |
Garaj et al.; “A System for Remote Sighted Guidance of Visually Impaired Pedestrians”; The British Journal of Visual Impairment; vol. 21, No. 2, 2003; 9 pages. |
Coughlan et al.; “Crosswatch: A System for Providing Guidance to Visually Impaired Travelers at Traffic Intersections”; Journal of Assistive Technologies 7.2; 2013; 17 pages. |
Sudol et al.; “LookTel—A Comprehensive Platform for Computer-Aided Visual Assistance”; Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference; Jun. 13-18, 2010; 8 pages. |
Paladugu et al.; “GoingEasy® with Crowdsourcing in the Web 2.0 World for Visually Impaired Users: Design and User Study”; Arizona State University; 8 pages. |
Kammoun et al.; “Towards a Geographic Information System Facilitating Navigation of Visually Impaired Users”; Springer Berlin Heidelberg; 2012; 8 pages. |
Bigham et al.; “VizWiz: Nearly Real-Time Answers to Visual Questions”; Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology; 2010; 2 pages. |
Guy et al.; “CrossingGuard: Exploring Information Content in Navigation Aids for Visually Impaired Pedestrians”; Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; May 5-10, 2012; 10 pages. |
Zhang et al.; “A Multiple Sensor-Based Shoe-Mounted User Interface Designed for Navigation Systems for the Visually Impaired”; 5th Annual ICST Wireless Internet Conference (WICON); Mar. 1-3, 2010; 9 pages. |
Shoval et al.; “Navbelt and the Guidecane—Robotics-Based Obstacle-Avoidance Systems for the Blind and Visually Impaired”; IEEE Robotics & Automation Magazine, vol. 10, Issue 1; Mar. 2003; 12 pages. |
Dowling et al.; “Intelligent Image Processing Constraints for Blind Mobility Facilitated Through Artificial Vision”; 8th Australian and New Zealand Intelligent Information Systems Conference (ANZIIS); Dec. 10-12, 2003; 7 pages. |
Heyes, Tony; “The Sonic Pathfinder: An Electronic Travel Aid for the Vision Impaired”; http://members.optuszoo.com.au/aheyew40/pa/pf_blerf.html; Dec. 11, 2014; 7 pages. |
Lee et al.; “Adaptive Power Control of Obstacle Avoidance System Using Via Motion Context for Visually Impaired Person”; International Conference on Cloud Computing and Social Networking (ICCCSN); Apr. 26-27, 2012; 4 pages. |
Wilson, Jeff, et al. “Swan: System for Wearable Audio Navigation”; 11th IEEE International Symposium on Wearable Computers; Oct. 11-13, 2007; 8 pages. |
Borenstein et al.; “The GuideCane—A Computerized Travel Aid for the Active Guidance of Blind Pedestrians”; IEEE International Conference on Robotics and Automation; Apr. 21-27, 1997; 6 pages. |
Bhatlawande et al.; “Way-finding Electronic Bracelet for Visually Impaired People”; IEEE Point-of-Care Healthcare Technologies (PHT), Jan. 16-18, 2013; 4 pages. |
Blenkhorn et al.; “An Ultrasonic Mobility Device with Minimal Audio Feedback”; Center on Disabilities Technology and Persons with Disabilities Conference; Nov. 22, 1997; 5 pages. |
Mann et al.; “Blind Navigation with a Wearable Range Camera and Vibrotactile Helmet”; 19th ACM International Conference on Multimedia; Nov. 28, 2011; 4 pages. |
Shoval et al.; “The Navbelt—A Computerized Travel Aid for the Blind”; RESNA Conference, Jun. 12-17, 1993; 6 pages. |
Kumar et al.; “An Electronic Travel Aid for Navigation of Visually Impaired Persons”; Communications Systems and Networks (COMSNETS), 2011 Third International Conference; Jan. 2011; 5 pages. |
Pawar et al.; “Multitasking Stick for Indicating Safe Path to Visually Disable People”; IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), vol. 10, Issue 3, Ver. II; May-Jun. 2015; 5 pages. |
Pagliarini et al.; “Robotic Art for Wearable”; Proceedings of EUROSIAM: European Conference for the Applied Mathematics and Informatics 2010; 10 pages. |
Greenberg et al.; “Finding Your Way: A Curriculum for Teaching and Using the Braillenote with Sendero GPS 2011”; California School for the Blind; 2011; 190 pages. |
Helal et al.; “Drishti: An Integrated Navigation System for Visually Impaired and Disabled”; Fifth International Symposium on Wearable Computers; Oct. 8-9, 2001; 8 pages. |
Parkes, Don; “Audio Tactile Systems for Designing and Learning Complex Environments as a Vision Impaired Person: Static and Dynamic Spatial Information Access”; EdTech-94 Proceedings; 1994; 8 pages. |
Zeng et al.; “Audio-Haptic Browser for a Geographical Information System”; ICCHP 2010, Part II, LNCS 6180; Jul. 14-16, 2010; 8 pages. |
AlZuhair et al.; “NFC Based Applications for Visually Impaired People—A Review”; IEEE International Conference on Multimedia and Expo Workshops (ICMEW); Jul. 14, 2014; 7 pages. |
Graf, Christian; “Verbally Annotated Tactile Maps—Challenges and Approaches”; Spatial Cognition VII, vol. 6222; Aug. 15-19, 2010; 16 pages. |
Hamid, Nazatul Naquiah Abd; “Facilitating Route Learning Using Interactive Audio-Tactile Maps for Blind and Visually Impaired People”; CHI 2013 Extended Abstracts; Apr. 27, 2013; 6 pages. |
Ramya, et al.; “Voice Assisted Embedded Navigation System for the Visually Impaired”; International Journal of Computer Applications; vol. 64, No. 13, Feb. 2013; 7 pages. |
Caperna et al.; “A Navigation and Object Location Device for the Blind”; Tech. rep. University of Maryland College Park; May 2009; 129 pages. |
Burbey et al.; “Human Information Processing with the Personal Memex”; ISE 5604 Fall 2005; Dec. 6, 2005; 88 pages. |
Ghiani, et al.; “Vibrotactile Feedback to Aid Blind Users of Mobile Guides”; Journal of Visual Languages and Computing 20; 2009; 13 pages. |
Guerrero et al.; “An Indoor Navigation System for the Visually Impaired”; Sensors vol. 12, Issue 6; Jun. 13, 2012; 23 pages. |
Nordin et al.; “Indoor Navigation and Localization for Visually Impaired People Using Weighted Topological Map”; Journal of Computer Science vol. 5, Issue 11; 2009; 7 pages. |
Hesch et al.; “Design and Analysis of a Portable Indoor Localization Aid for the Visually Impaired”; International Journal of Robotics Research; vol. 29; Issue 11; Sep. 2010; 15 pages. |
Joseph et al.; “Visual Semantic Parameterization—to Enhance Blind User Perception for Indoor Navigation”; Multimedia and Expo Workshops (ICMEW), 2013 IEEE International Conference; Jul. 15, 2013; 7 pages. |
Katz et al; “NAVIG: Augmented Reality Guidance System for the Visually Impaired”; Virtual Reality (2012) vol. 16; 2012; 17 pages. |
Rodriguez et al.; “Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback”; Sensors 2012; vol. 12; 21 pages. |
Treuillet; “Outdoor/Indoor Vision-Based Localization for Blind Pedestrian Navigation Assistance”; WSPC/Instruction File; May 23, 2010; 16 pages. |
Ran et al.; “Drishti: An Integrated Indoor/Outdoor Blind Navigation System and Service”; Proceeding PERCOM '04 Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (PerCom'04); 2004; 9 pages. |
Wang, et al.; “Camera-Based Signage Detection and Recognition for Blind Persons”; 13th International Conference (ICCHP) Part 2 Proceedings; Jul. 11-13, 2012; 9 pages. |
Krishna et al.; “A Systematic Requirements Analysis and Development of an Assistive Device to Enhance the Social Interaction of People Who are Blind or Visually Impaired”; Workshop on Computer Vision Applications for the Visually Impaired; Marseille, France; 2008; 12 pages. |
Lee et al.; “A Walking Guidance System for The Visually Impaired”; International Journal of Pattern Recognition and Artificial Intelligence; vol. 22; No. 6; 2008; 16 pages. |
Ward et al.; “Visual Experiences in the Blind Induced by an Auditory Sensory Substitution Device”; Journal of Consciousness and Cognition; Oct. 2009; 30 pages. |
Merino-Gracia et al.; “A Head-Mounted Device for Recognizing Text in Natural Scenes”; CBDAR'11 Proceedings of the 4th International Conference on Camera-Based Document Analysis and Recognition; Sep. 22, 2011; 7 pages. |
Yi, Chucai; “Assistive Text Reading from Complex Background for Blind Persons”; CBDAR'11 Proceedings of the 4th International Conference on Camera-Based Document Analysis and Recognition; Sep. 22, 2011; 7 pages. |
Yang, et al.; “Towards Automatic Sign Translation”; The Interactive Systems Lab, Carnegie Mellon University; 2001; 5 pages. |
Meijer, Dr. Peter B.L.; “Mobile OCR, Face and Object Recognition for the Blind”; The vOICe, www.seeingwithsound.com/ocr.htm; Apr. 18, 2014; 7 pages. |
Omron; Optical Character Recognition Sensor User's Manual; 2012; 450 pages. |
Park, Sungwoo; “Voice Stick”; www.yankodesign.com/2008/08/21/voice-stick; Aug. 21, 2008; 4 pages. |
Rentschler et al.; “Intelligent Walkers for the Elderly: Performance and Safety Testing of VA-PAMAID Robotic Walker”; Department of Veterans Affairs Journal of Rehabilitation Research and Development; vol. 40, No. 5; Sep./Oct. 2013; 9 pages. |
Science Daily; “Intelligent Walker Designed to Assist the Elderly and People Undergoing Medical Rehabilitation”; http://www.sciencedaily.com/releases/2008/11/081107072015.htm; Jul. 22, 2014; 4 pages. |
Glover et al.; “A Robotically-Augmented Walker for Older Adults”; Carnegie Mellon University, School of Computer Science; Aug. 1, 2003; 13 pages. |
OrCam; www.orcam.com; Jul. 22, 2014; 3 pages. |
Eccles, Lisa; “Smart Walker Detects Obstacles”; Electronic Design; http://electronicdesign.com/electromechanical/smart-walker-detects-obstacles; Aug. 20, 2001; 2 pages. |
Graf, Birgit; “An Adaptive Guidance System for Robotic Walking Aids”; Journal of Computing and Information Technology—CIT 17; 2009; 12 pages. |
Frizera et al.; “The Smart Walkers as Geriatric Assistive Device. The SIMBIOSIS Purpose”; Gerontechnology, vol. 7, No. 2; Jan. 30, 2008; 6 pages. |
Rodriguez-Losada et al.; “Guido, The Robotic Smart Walker for the Frail Visually Impaired”; IEEE International Conference on Robotics and Automation (ICRA); Apr. 18-22, 2005; 15 pages. |
Kayama et al.; “Outdoor Environment Recognition and Semi-Autonomous Mobile Vehicle for Supporting Mobility of the Elderly and Disabled People”; National Institute of Information and Communications Technology, vol. 54, No. 3; Aug. 2007; 11 pages. |
Kalra et al.; “A Braille Writing Tutor to Combat Illiteracy in Developing Communities”; Carnegie Mellon University Research Showcase, Robotics Institute; 2007; 10 pages. |
Blaze Engineering; “Visually Impaired Resource Guide: Assistive Technology for Students who use Braille”; Braille 'n Speak Manual; http://www.blaize.com; Nov. 17, 2014; 5 pages. |
AppleVis; An Introduction to Braille Screen Input on iOS 8; http://www.applevis.com/guides/braille-ios/introduction-braille-screen-input-ios-8, Nov. 16, 2014; 7 pages. |
Dias et al.; “Enhancing an Automated Braille Writing Tutor”; IEEE/RSJ International Conference on Intelligent Robots and Systems; Oct. 11-15, 2009; 7 pages. |
D'Andrea, Frances Mary; “More than a Perkins Brailler: A Review of the Mountbatten Brailler, Part 1”; AFB AccessWorld Magazine; vol. 6, No. 1, Jan. 2005; 9 pages. |
Trinh et al.; “Phoneme-based Predictive Text Entry Interface”; Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility; Oct. 2014; 2 pages. |
Merri et al.; “The Instruments for a Blind Teacher of English: The challenge of the board”; European Journal of Psychology of Education, vol. 20, No. 4 (Dec. 2005), 15 pages. |
Kirinic et al.; “Computers in Education of Children with Intellectual and Related Developmental Disorders”; International Journal of Emerging Technologies in Learning, vol. 5, 2010, 5 pages. |
Campos et al.; “Design and Evaluation of a Spoken-Feedback Keyboard”; Department of Information Systems and Computer Science, INESC-ID/IST/Universidade Tecnica de Lisboa, Jul. 2004; 6 pages. |
Ebay; Matin (Made in Korea) Neoprene Canon DSLR Camera Curved Neck Strap #6782; http://www.ebay.com/itm/MATIN-Made-in-Korea-Neoprene-Canon-DSLR-Camera-Curved-Neck-Strap-6782-/281608526018?hash=item41912d18c2:g:˜pMAAOSwe-FU6zDa ; 4 pages. |
Newegg; Motorola S10-HD Bluetooth Stereo Headphone w/ Comfortable Sweat Proof Design; http://www.newegg.com/Product/Product.aspx?Item=9SIA0NW2G39901&Tpk=9sia0nw2g39901; 4 pages. |
Newegg; Motorola Behind the Neck Stereo Bluetooth Headphone Black/Red Bulk (S9)—OEM; http://www.newegg.com/Product/Product.aspx?Item=N82E16875982212&Tpk=n82e16875982212; 3 pages. |
Bharathi et al.; “Effective Navigation for Visually Impaired by Wearable Obstacle Avoidance System;” 2012 International Conference on Computing, Electronics and Electrical Technologies (ICCEET); pp. 956-958; 2012. |
Pawar et al.; “Review Paper on Multitasking Stick for Guiding Safe Path for Visually Disable People;” IJPRET; vol. 3, No. 9; pp. 929-936; 2015. |
Ram et al.; “The People Sensor: A Mobility Aid for the Visually Impaired;” 2012 16th International Symposium on Wearable Computers; pp. 166-167; 2012. |
Singhal; “The Development of an Intelligent Aid for Blind and Old People;” Emerging Trends and Applications in Computer Science (ICETACS), 2013 1st International Conference; pp. 182-185; Sep. 13, 2013. |
Aggarwal et al.; “All-in-One Companion for Visually Impaired;” International Journal of Computer Applications; vol. 79, No. 14; pp. 37-40; Oct. 2013. |
“Light Detector” EveryWare Technologies; 2 pages; Jun. 18, 2016. |
Arati et al.; “Object Recognition in Mobile Phone Application for Visually Impaired Users;” IOSR Journal of Computer Engineering (IOSR-JCE); vol. 17, No. 1; pp. 30-33; Jan. 2015. |
Yabu et al.; “Development of a Wearable Haptic Tactile Interface as an Aid for the Hearing and/or Visually Impaired;” NTUT Education of Disabilities; vol. 13; pp. 5-12; 2015. |
Mau et al.; “BlindAid: An Electronic Travel Aid for the Blind;” The Robotics Institute Carnegie Mellon University; 27 pages; May 2008. |
Shidujaman et al.; “Design and navigation Prospective for Wireless Power Transmission Robot;” IEEE; Jun. 2015. |
Wu et al. “Fusing Multi-Modal Features for Gesture Recognition”, Proceedings of the 15th ACM on International Conference on Multimodal Interaction, Dec. 9, 2013, ACM, pp. 453-459. |
Pitsikalis et al. “Multimodal Gesture Recognition via Multiple Hypotheses Rescoring”, Journal of Machine Learning Research, Feb. 2015, pp. 255-284. |
Shen et al. “Walkie-Markie: Indoor Pathway Mapping Made Easy” 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI'13); pp. 85-98, 2013. |
Tu et al. “Crowdsourced Routing II D2.6” 34 pages; 2012. |
De Choudhury et al. “Automatic Construction of Travel Itineraries Using Social Breadcrumbs” pp. 35-44; Jun. 2010. |
Number | Date | Country | |
---|---|---|---|
20160232817 A1 | Aug 2016 | US |