This invention relates to wearable reminder or gesture control devices.
Alzheimer's, an irreversible, progressive brain disease, affects 5.1 million Americans. Alzheimer's patients experience a gradual decline in memory and thinking that can make it difficult for them to complete familiar tasks. To overcome this challenge, a reminder device, which can be programmed remotely by a caregiver, could prompt those with Alzheimer's with timely notifications. Likewise, autism affects 1 in 68 children in the US, and about 1% of the world's population has autism. A reminder device built on the same technology could likewise assist individuals across a broad spectrum of autism.
It has been shown that autistic students respond to commands better when the commands are announced in the same tone of voice. As a result, instead of relying on live prompting, which contains voice inflections, students who use our device will be able to depend on a more tone-regulated notification system. Not only will the technology of this invention diminish the time and energy instructors spend relaying repeated information to their students, but it will also elicit a better response from those students.
Although reminder devices exist in the market, the ability to customize such devices is limited. Most clock applications can be set to specific times to provide reminders; however, these reminders mostly come in the form of buzzers or non-customizable, non-verbal commands. Clocks that have a voice-controlled reminder feature are mostly expensive and offer limited customizability. There are software applications available on mobile devices that provide reminder functionality; however, the high cost of the mobile device, or the inability to use one, makes these applications inaccessible to many individuals, including, for example, autistic, blind, and elderly users.
For setting calendar-based alarms, existing reminder devices either use displays or use speech recognition software. Display-based devices can be touchscreen or keypad based. The problem with these devices is that they need to be large so that the user can read the screen or use the keypad with ease. In addition, these devices are not suitable for people who are blind or visually impaired. Furthermore, the display itself can have significant power consumption.
Other types of reminder devices use speech recognition for extracting the time and contextual information from the commands. Speech recognition requires a large amount of memory and computational power, which increases the size and power consumption of the device. In addition, noisy environments may make it difficult to recognize commands accurately.
The present invention advances the art by providing a reminder device using gestures without the use of displays, keypads or speech recognition software, thereby significantly reducing size, storage requirements and power consumption.
The present invention provides a reminder or gesture control device that is a low power, cost-effective wearable device that provides time-based audio reminders and uses a method for improved recognition of gestures. It has a small size and can be clipped near the shoulder or worn elsewhere. Unlike other devices described in the art, this device does not have a keypad or display for user input, nor does it perform speech recognition. The device is programmed and used solely through gestures. This provides the added advantage of a very small size, which is important for wearable devices. Hence the device is suitable for use by the blind, persons with a speech impediment, visually impaired persons, and others who are unable to read small fonts.
The device can be programmed in one of two ways. In the first method, the device is programmed through contactless operations such as moving the hand in front of the device in different directions (UP, DOWN, LEFT, RIGHT, NEAR, FAR) that are detected by a gesture sensor on the device. The device produces audio output that guides the user through the process of programming and using the device. For convenience, the device prompts the user on the appropriate gesture for the next command, thereby relieving the user from having to learn a large number of gestures for programming the device. In the second method, the user traces shapes (e.g. letters or numbers) while holding a mobile device with inertial sensors (such as a digital pen or wand), and the motion trajectory information of the number or shape is transmitted wirelessly to the device for further processing and shape recognition. The device is programmed to automatically process data from either the digital pen or the gesture sensor depending on whichever one is used by the user. This dual programming option allows the user to select the human-computer interaction (HCI) technique that is more convenient.
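The following is a minimal sketch of the audio-guided, gesture-prompted programming flow described above. The gesture names follow the text, but the prompt wording, state names, and the speak() placeholder are illustrative assumptions rather than the device's actual firmware.

```python
# A minimal sketch (assumed prompt wording and state names, not the actual
# firmware) of the audio-guided, gesture-prompted programming flow.

PROMPTS = {
    "idle": ("Make a FAR gesture to start setting an alarm.",
             {"FAR": "set_date"}),
    "set_date": ("Swipe UP or DOWN to change the date, FAR to accept.",
                 {"UP": "set_date", "DOWN": "set_date", "FAR": "set_time"}),
    "set_time": ("Swipe UP or DOWN to change the time, FAR to accept.",
                 {"UP": "set_time", "DOWN": "set_time", "FAR": "record_memo"}),
    "record_memo": ("Speak your reminder after the beep.", {}),
}

def speak(text):
    print("[AUDIO]", text)          # placeholder for the device's speaker

def program_device(gesture_stream):
    state = "idle"
    speak(PROMPTS[state][0])
    for gesture in gesture_stream:
        prompt, transitions = PROMPTS[state]
        if gesture in transitions:
            state = transitions[gesture]
            speak(PROMPTS[state][0])     # prompt the user for the next step
        else:
            speak("That gesture is not expected here. " + prompt)
        if state == "record_memo":
            break
    return state

if __name__ == "__main__":
    # Simulated gestures from the contactless sensor.
    program_device(["FAR", "UP", "UP", "FAR", "DOWN", "FAR"])
```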
The device can automatically connect to a wireless data network and perform various operations such as synchronization with a time server, creating a graphical user interface displaying calendar information to a user on a computer or mobile device, and storing compliance information on a local machine or remote server.
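As one illustration of the compliance-storage operation, the sketch below logs reminder acknowledgements to a local JSON file and uploads them to a remote server. The record fields and the example.com endpoint are assumptions made for this example only, not part of the invention as claimed.

```python
# Illustrative sketch of storing compliance information locally and uploading
# it to a remote server. The record fields and the endpoint URL are
# assumptions made for this example only.
import json
import time
import urllib.request

LOG_PATH = "compliance_log.json"

def log_compliance(alarm_id, acknowledged, path=LOG_PATH):
    """Append one compliance record to a local JSON log file."""
    record = {"alarm_id": alarm_id,
              "acknowledged": acknowledged,
              "timestamp": time.time()}
    try:
        with open(path) as f:
            records = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        records = []
    records.append(record)
    with open(path, "w") as f:
        json.dump(records, f, indent=2)
    return record

def upload_compliance(records, url="https://example.com/compliance"):
    """POST the records to a (hypothetical) remote compliance server."""
    data = json.dumps(records).encode("utf-8")
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(log_compliance("alarm-001", acknowledged=True))
```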
In one embodiment, the reminder device is wearable by a user. The device has a gesture sensor for detecting a first type of gestures. The gesture sensor is capable of recognizing gestures made by the user's hand. The device also has one or more inertial sensors (e.g. in a digital pen or wand, an accelerometer or a gyroscope) for detecting a second type of gestures. Each of the inertial sensors is capable of recognizing gestures made by the user handling the inertial sensor. An inertial sensor is wirelessly connected with the device.
The device further has a gesture recognition module executable by a processor on the device. The gesture recognition module is configured to receive the sensor data for the first and second types of gestures. The gesture recognition module is programmed to recognize: (i) gestures of the first type as device commands detected by the gesture sensor, and (ii) gestures of the second type as shapes, such as letters, numbers, or symbols, traced by the user with the inertial sensor.
An example of (i) is UP/DOWN, LEFT/RIGHT and NEAR/FAR defining 3 sets with 2 types per set.
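A toy sketch of such a recognition module is shown below: discrete command gestures of the first type are matched against a fixed vocabulary, while traced trajectories of the second type are compared against stored templates. The template set and distance metric are simplified assumptions for illustration only, not the device's actual recognizer.

```python
# Toy recognition module handling both gesture types: (i) discrete command
# gestures from the contactless sensor and (ii) traced trajectories from the
# wireless inertial sensor. The templates and distance metric are simplified
# assumptions for illustration only.

DEVICE_COMMANDS = {"UP", "DOWN", "LEFT", "RIGHT", "NEAR", "FAR"}

# Hypothetical 2-D stroke templates for a few traced characters.
TEMPLATES = {
    "1": [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)],
    "7": [(0.0, 1.0), (1.0, 1.0), (0.5, 0.0)],
    "L": [(0.0, 1.0), (0.0, 0.0), (1.0, 0.0)],
}

def _distance(stroke, template):
    # Compare corresponding points of the two strokes (toy metric).
    n = min(len(stroke), len(template))
    return sum((sx - tx) ** 2 + (sy - ty) ** 2
               for (sx, sy), (tx, ty) in zip(stroke[:n], template[:n]))

def recognize(sample):
    """Dispatch on the gesture type and return the recognized symbol."""
    if isinstance(sample, str):                  # first type: device command
        return sample if sample in DEVICE_COMMANDS else None
    # second type: traced trajectory -> nearest stored template
    return min(TEMPLATES, key=lambda k: _distance(sample, TEMPLATES[k]))

if __name__ == "__main__":
    print(recognize("FAR"))                                   # device command
    print(recognize([(0.0, 1.0), (0.0, 0.4), (0.9, 0.0)]))    # traced shape
```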
The device further has a state machine executable by the same processor or a different processor on the device. The state machine is configured to have a plurality of states programmed to receive the recognized gestures by the gesture recognition module. The state machine changes its state based on the recognized gestures. Depending on the state machine's current state, audio data is output to the user (j) as a result of the user's first and second type of gestures communicated by the gesture and inertial sensors or (jj) as a result of a specific state of the state machine. The device is particularly useful as a calendaring system.
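The sketch below illustrates this behavior with a two-state machine: audio is produced either in response to a recognized gesture or because the machine itself enters a particular state, such as an alarm becoming due. The state names, timing, and speak() placeholder are assumptions for illustration.

```python
# Sketch of the state-machine behavior: audio is produced either in response
# to a recognized gesture or because the machine itself enters a particular
# state (here, an alarm becoming due). State names and the speak() placeholder
# are assumptions.
import time

class ReminderStateMachine:
    def __init__(self, alarm_time, memo):
        self.state = "WAITING"
        self.alarm_time = alarm_time
        self.memo = memo

    def speak(self, text):
        print("[AUDIO]", text)          # placeholder for the speaker output

    def on_gesture(self, gesture):
        # Audio produced as a direct result of a user gesture.
        if self.state == "ALARM" and gesture == "NEAR":
            self.state = "WAITING"
            self.speak("Reminder acknowledged.")

    def tick(self, now):
        # Audio produced because the machine reached a specific state.
        if self.state == "WAITING" and now >= self.alarm_time:
            self.state = "ALARM"
            self.speak(self.memo)

if __name__ == "__main__":
    sm = ReminderStateMachine(alarm_time=time.time(), memo="Take your medication.")
    sm.tick(time.time())      # the alarm state fires and the memo is spoken
    sm.on_gesture("NEAR")     # the user acknowledges with a gesture
```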
The state machine wirelessly outputs data like text, images, video and/or audio to a separate device or display.
Embodiments of the reminder device could help the disabled, ill and elderly stay focused on tasks and complete desired activities, providing them with greater independence. It could also help researchers, entrepreneurs and social workers who are either studying this group, building assistive technologies or caring for these individuals. Specifically, researchers would be able to better understand the group's characteristics and behaviors, entrepreneurs could create products that would ultimately benefit clients, and social workers could manage the group more effectively. To determine how large an impact reminder systems can make for a given group, scholars could gather statistical records. Improvements based on this data could also be made with regard to the creation of a more pragmatic and efficient reminder device. The device is affordable, low power, customizable and capable of connecting to the Internet. As a result, our device can benefit millions of people around the world, especially those who are disabled, ill or elderly, by promoting independence and enabling them to lead more productive lives.
The gesture sensor 101 is used for detecting a specific set of gestures and is located on the top of the device.
The user can program the device either through the gesture sensor APDS-9960 or by using a mobile device (such as a digital pen or wand) that transmits motion trajectory information of a traced number or shape to the device for further processing. The gesture sensor can recognize several simple motions such as UP, DOWN, LEFT, RIGHT, NEAR, and FAR. To start recording an alarm, a user can perform a gesture such as FAR. This triggers an interrupt on the microcontroller, which then prompts the user to enter a date for the alarm. Alternatively, if the user is using a digital pen, the RF receiver on the device triggers an interrupt, and the microcontroller stores the motion trajectory information received by the RF receiver (nRF24L01+) in memory for further analysis and number/character recognition. Otherwise, if the user makes an UP or DOWN gesture on the gesture sensor, the device starts to count up or down, starting from the current date, until the user makes a selection using a FAR gesture when the appropriate date is reached. This process is continued until the user has entered all the parameters (such as date and time) for the alarm. The user is then prompted to start recording the voice memo, and the audio data is stored on the SD card. To conserve power, peripheral devices such as the amplifier and speaker are turned off when they are not used. The RF receiver is also placed in deep sleep and is only woken up when there is data to be received.
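The UP/DOWN counting entry described above might be implemented along the lines of the following sketch, in which UP and DOWN gestures step through candidate dates from the current date and a FAR gesture confirms the selection. The one-day step size and announcement format are assumptions about one reasonable implementation.

```python
# Toy illustration of the UP/DOWN counting entry: starting from the current
# date, UP and DOWN gestures step through candidates and a FAR gesture
# confirms the selection. The one-day step size and announcement format are
# assumptions about one reasonable implementation.
import datetime

def select_date(gesture_stream, start=None):
    value = start or datetime.date.today()
    for gesture in gesture_stream:
        if gesture == "UP":
            value += datetime.timedelta(days=1)
        elif gesture == "DOWN":
            value -= datetime.timedelta(days=1)
        elif gesture == "FAR":
            return value                     # user confirmed this candidate
        print("[AUDIO] Current selection:", value.isoformat())
    return None

if __name__ == "__main__":
    # Simulated gestures from the contactless sensor.
    print(select_date(["UP", "UP", "DOWN", "FAR"]))
```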
Embodiments of the invention could be varied or complemented in several ways, as described below.
Embodiments of the invention can be structurally enabled as one or more chips or processors in a wearable device (e.g. at the wrist, hip or where the user prefers) executing computer programs, methods or code defining the state machine, the sensory recognition and detection, the output data and/or the objective/goals defined for the wearable device. The embodiments could be envisioned as devices, methods, systems, computer programs and/or products.
Embodiments of the invention can further be varied as follows to handle two cases: (a) the traced gesture is not recognized by the device, and (b) the traced gesture is recognized incorrectly.
In case of (a), when the device fails to recognize the input letter, it prompts the user to reenter the gesture through an alternate method. For example, the device informs the user the gesture could not be recognized and gives instructions on how to make a selection using the first gesture sensor that accepts device commands. It then speaks out numbers/letters/symbols and the user can perform a gesture, such as “NEAR”, when the correct number or letter is spoken to select it. Other gestures, such as move up (move down), can be used to speed up (slow down) the rate at which the numbers or letters are spoken out. As another example, the device may speak out the gestures that are nearest matches to the gesture performed. Suppose that the user traced the number 2, and the closest matches are 7, 1, L, and 2. The device announces each of the closest matches, and the user can make a gesture in the air, such as “NEAR”, to select the desired number/letter/symbol when it is announced.
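A possible realization of this nearest-match fallback is sketched below: each candidate is announced in turn, a NEAR gesture selects the candidate currently being announced, and UP/DOWN gestures change the announcement rate. The pacing values and the get_gesture polling interface are illustrative assumptions.

```python
# Sketch of the nearest-match fallback: candidates are announced one at a
# time, a NEAR gesture selects the one being announced, and UP/DOWN gestures
# speed up or slow down the announcements. The pacing values and the
# get_gesture polling interface are illustrative assumptions.
import time

def announce_and_select(candidates, get_gesture, delay=0.5):
    for candidate in candidates:
        print("[AUDIO]", candidate)
        gesture = get_gesture()              # poll the contactless sensor
        if gesture == "NEAR":
            return candidate                 # select the announced candidate
        if gesture == "UP":
            delay = max(0.1, delay / 2)      # speak faster
        elif gesture == "DOWN":
            delay *= 2                       # speak slower
        time.sleep(delay)
    return None

if __name__ == "__main__":
    # The user traced "2"; the closest matches are announced and the user
    # makes a NEAR gesture when "2" is spoken.
    gestures = iter([None, None, None, "NEAR"])
    print(announce_and_select(["7", "1", "L", "2"], lambda: next(gestures)))
```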
In case of (b), when the gesture recognition is incorrect, the user can use the first type of sensor to enter a device command gesture, such as by the gesture “FAR”, to indicate the gesture has to be cancelled. The device asks the user to confirm that the gesture should be cancelled and then prompts the user to reenter it.
In other words, the reminder/control device can use an alternate method for gesture identification using the first gesture sensor if the second gesture sensor does not provide correct identification of the gesture.
The first gesture sensor can be an infrared, photoelectric, capacitive, or electric-field based sensor, or an RFID tag array.
The second gesture sensor may be a camera that has the capability to recognize the gesture from a distance or an inertial sensor.
In some embodiments, the invention can be used to support the control of electronic devices with gestures. We will refer to the Reminder Device as the Gesture Control Device when it is used in this embodiment. Cell phones and watches can be used to control TVs and home appliances today through apps with speech recognition or a graphical user interface. The Gesture Control Device may be located in a TV remote control, cell phone, watch, TV or elsewhere to further support this functionality via gestures. The first gesture sensor on the Gesture Control Device detects device commands. The second gesture sensor is the external sensor that detects the user's gesture; for example, it could be a TV camera or an inertial sensor held in the user's hand. When the user enters a gesture using the second gesture sensor, and the gesture is not recognized or is recognized incorrectly by the Gesture Control Device, the latter provides an alternate method for the user to reenter the gesture using the first gesture sensor and provides audio feedback to guide the user through the process. The user may use gestures to enter a channel number for viewing, search for a particular show by entering its name with gestures, record a show, and more. For each of these operations, the Gesture Control Device is used to improve the gesture recognition.
The state machine of this device is modified with new states depending on the application. For example, consider a Gesture Control Device in a smart watch or phone to further support the control of a TV using gestures. The user can use the camera in a TV as the external sensor to enter the second type of gestures, such as the name of a show to watch. Data from each gesture will be transmitted from the TV to the Gesture Control Device in the smart watch, and spoken out to the user. The Gesture Control Device will provide audio prompts of the gestures made by the user and directions to complete the process of entering the name of the show via gestures. The gesture information is then provided to the smart watch app by the Gesture Control Device, and the appropriate command is then sent from the smart watch to the TV to switch to the new show. In another embodiment of the invention, the user may use gestures to trace commands such as “Switch channel to 1”, “increase volume”, and “off” that are detected by the second gesture sensor (TV camera) and used by the smart watch or cell phone and the Gesture Control Device to control the TV.
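One way the state machine could be extended for this application is sketched below, where traced characters recognized from the TV camera accumulate into a show name and a FAR command gesture triggers the search. The state names, speak() placeholder, and send_to_tv callback are hypothetical.

```python
# Hypothetical extension of the state machine for show-name entry: traced
# characters recognized from the TV camera accumulate into a name, and a FAR
# command gesture triggers the search. State names, the speak() placeholder
# and the send_to_tv callback are assumptions.

class TvGestureControl:
    def __init__(self, send_to_tv):
        self.state = "IDLE"
        self.buffer = []
        self.send_to_tv = send_to_tv          # callback supplied by the watch app

    def speak(self, text):
        print("[AUDIO]", text)

    def on_traced_character(self, char):
        # Characters recognized from second-type (TV camera) gestures.
        if self.state == "IDLE":
            self.state = "ENTER_SHOW_NAME"
            self.speak("Entering show name.")
        self.buffer.append(char)
        self.speak("Got " + char)

    def on_command_gesture(self, gesture):
        # First-type (device command) gestures complete the entry.
        if self.state == "ENTER_SHOW_NAME" and gesture == "FAR":
            name = "".join(self.buffer)
            self.speak("Searching for " + name)
            self.send_to_tv({"action": "search", "query": name})
            self.state, self.buffer = "IDLE", []

if __name__ == "__main__":
    ctrl = TvGestureControl(send_to_tv=print)
    for c in "NEWS":
        ctrl.on_traced_character(c)
    ctrl.on_command_gesture("FAR")
```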
In another embodiment, the Gesture Control Device is used in a smart watch or cell phone to support the control of a smart home system using gestures. The smart watch may perform operations in the home, such as turning appliances on or off through the graphical user interface, and the Gesture Control Device can enhance this functionality by supporting the control operations via gestures. Suppose that the user wants to turn off a light. The user moves a device with an inertial sensor (second type of sensor) and traces the command “OFF”. This gesture information is transmitted to the Gesture Control Device. The Gesture Control Device could then communicate with the smart home controller application in the smart watch and provide audio feedback to the user, for example by announcing the recognized command and guiding the user to confirm or correct the selection using the first type of sensor.
After the user performs the desired control gesture by moving the hand in the desired manner over the first type of sensor to select the desired operation, this information is transmitted by the smart watch to the smart home controller, which can turn off the appliance. In another embodiment of the device, the visual feedback to the user can be projected from the device onto a separate display, say, the user's arm, clothing, or a wall. In another embodiment, the functionality of the Gesture Control Device can be implemented through a software application in a smart watch, cell phone or other device.
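A brief sketch of this confirmation-and-forwarding step follows: the traced command is announced, a NEAR gesture on the device's own sensor confirms it, and the command is then passed to the smart home controller. The gesture names follow the text; the controller interface is an assumption.

```python
# Brief sketch of the confirmation-and-forwarding step: the traced command is
# announced, a NEAR gesture on the device's own sensor confirms it, and the
# command is forwarded to the smart home controller. Gesture names follow the
# text; the controller interface is an assumption.

def handle_traced_command(traced, confirm_gesture, controller):
    print("[AUDIO] You traced:", traced, "- make a NEAR gesture to confirm.")
    if confirm_gesture() == "NEAR":
        controller(traced)                       # forward to the smart home
        print("[AUDIO]", traced, "command sent.")
    else:
        print("[AUDIO] Command cancelled. Please trace it again.")

if __name__ == "__main__":
    smart_home = lambda cmd: print("[SMART HOME] executing", cmd)
    handle_traced_command("OFF", confirm_gesture=lambda: "NEAR", controller=smart_home)
```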
This application is a continuation-in-part of U.S. patent application Ser. No. 15/092,212 filed Apr. 6, 2016, which is incorporated herein by reference. U.S. patent application Ser. No. 15/092,212 claims priority from U.S. Provisional Patent Application 62/144,098 filed Apr. 7, 2015, which is incorporated herein by reference.
Provisional Applications
Number | Date | Country
62/144,098 | Apr. 2015 | US

Parent and Child Applications
Relation | Number | Date | Country
Parent | 15/092,212 | Apr. 2016 | US
Child | 16/279,228 | | US