This disclosure relates generally to electronic wearable devices, and more particularly, to an electronic wearable device incorporating an ambient sound event detection and response system.
Electronic wearable devices are gaining popularity and come in many shapes and forms. For example, headsets are designed to capture and play audio, including voice calls. However, headsets and other existing electronic wearable devices are not designed to be customized, especially with respect to their aesthetic appearance. The inability to customize the look of an electronic wearable device makes it difficult to suit the user's taste in all use cases and situations.
While many existing electronic devices can be used for making emergency calls, none provides active emergency monitoring and response based on ambient sound events. That is, existing electronic devices require a user-initiated action (e.g., pressing a button, dialing a number, providing a verbal confirmation) to activate their emergency notification capabilities. This is not ideal if the user is already unconscious or otherwise incapacitated and unable to perform the action.
In one aspect, this disclosure relates to a wearable electronic device with an interchangeable faceplate. The wearable electronic device can function as a speakerphone. The wearable electronic device can incorporate an acoustic reflector for improving speaker sound quality.
In another aspect, this disclosure relates to an ambient sound event detection and response system. The system is capable of intelligently activating an emergency response in response to detecting and recognizing certain sound events with no or minimal user input.
In the following description of preferred embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which are shown, by way of illustration, specific embodiments that can be practiced. It is to be understood that other embodiments can be used and structural changes can be made without departing from the scope of the embodiments of this disclosure.
This disclosure generally relates to electronic wearable devices (or “devices” as referred to hereinafter). In one embodiment, as illustrated in
Other mechanisms can also be used. As an example, the base 104 can be magnetic and the faceplate 106 can be metal (or vice versa), and the two can attach to each other magnetically. In one embodiment, the base (or the faceplate) may include a small magnetic ring along its outer edge. Alternatively, a magnet can be positioned in the center of the faceplate 106. As other examples, the faceplate 106 can be snapped or twisted onto or off of the base 104.
The base 104 and detachable faceplate can be made of waterproof material.
The base 104 includes the electronic components that provide various functions such as speakerphone functions. The faceplate 106 primarily serves an aesthetic purpose. In the embodiment of
As mentioned above, the base 104 of the electronic wearable device 100 can include electronic components. In the embodiment illustrated in
As illustrated in the figure, the front end 404, to which the faceplate (not shown in
In the embodiment illustrated in
The same design, with the reflector 408 placed in front of the front end 404 of the speaker enclosure 402, also works when there are listeners beside the wearer of the device. These other listeners can be to the side of the speaker enclosure 402, in which case the speaker is also not pointed in the direction of these listeners. Again, the reflector 408, with its curved sound-reflecting surface facing the front end 404 of the speaker enclosure 402, can redirect the sound to the listeners instead of allowing the sound to dissipate away from them. In other words, the illustrated design allows for a sealed speaker enclosure 402 that may produce higher-fidelity sound while the reflector 408 redirects that sound toward the listeners.
Although in the embodiments discussed above, the electronic wearable devices mimic common necklaces, it should be understood that the device can also be in the form of a shirt clip, tie clip, wrist band/watch, ankle band, head band, and the like that incorporates the interchangeable faceplates and, optionally, a sound reflector to direct sound from a speaker inside the device.
In another aspect of the disclosure, a sound event detection and response system is disclosed. Various embodiments of the sound event detection and response system can detect and recognize certain types of sound and, in response, initialize an emergency response. For example, an embodiment of the system can detect a dog barking and automatically ask a user if there is a problem. If the user fails to respond within a certain time frame, the system can automatically take one or more actions such as calling emergency services.
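The detect-then-prompt-then-escalate behavior described above can be sketched in a few lines. The following is a minimal, hypothetical Python sketch; the class name, event labels, and action names are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of the detect -> prompt -> timeout -> respond flow.
# All names (SoundEventResponder, event labels, action strings) are
# illustrative assumptions, not from the disclosure.

EMERGENCY_SOUNDS = {"dog_bark", "car_alarm", "smoke_alarm", "scream", "gunshot"}

class SoundEventResponder:
    def handle_event(self, event_label, user_response=None):
        """Return the action the device would take for a captured sound event.

        `user_response` models the user's answer to the prompt: True (needs
        help), False (declines), or None (no answer within the time frame).
        """
        if event_label not in EMERGENCY_SOUNDS:
            return "ignore"
        if user_response is None:
            # User failed to respond within the time frame: escalate.
            return "call_emergency_services"
        return "call_emergency_services" if user_response else "stand_down"

responder = SoundEventResponder()
print(responder.handle_event("dog_bark", user_response=None))   # no answer -> escalate
print(responder.handle_event("dog_bark", user_response=False))  # user declines
print(responder.handle_event("birdsong"))                       # not an emergency sound
```

The key design point is that silence is treated as an escalation signal: unlike existing devices that require a user-initiated action, the absence of a response within the time frame is itself what triggers the emergency request.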
When in use, the microphone 506 can capture an external sound event 520 and audible input from a user 522. The sound event 520 can be any sound, such as a dog barking, an alarm going off, a car crash, or a gunshot. Audible input from a user can be a verbal response/instruction or any sound identifiable as made by a human. After the sound event 520 is captured by microphone 506, it is transmitted to the microprocessor 504, which analyzes the sound event to determine whether sound event 520 matches one of the pre-stored sounds. Microprocessor 504 can be any type of computer processor configured to receive signals and process the signals to determine a plurality of conditions of the operation of device 502. Processor 504 may also be configured to generate and transmit command signals, via I/O interface 509, to actuate local components such as microphones 506 and 507 and speaker 512. Processor 504 can also communicate with external devices such as a mobile phone 532 and/or cloud 534 over a network 550.
In one embodiment, the pre-stored sounds can include the different sounds from a variety of emergencies such as, but not limited to, car alarms, dogs barking, fire/smoke alarms, screaming, and gunshots. The pre-stored sounds can be stored in a database on a storage 505 accessible by the processor 504. The storage device 505 can be local to device 502, as in this embodiment, or on a remote device connected to device 502 via a network, as in other embodiments. For example, the pre-stored sounds can be stored on a cloud server. In one embodiment, different-sounding alarms can be stored in the database together with machine learning analysis to help detect variations in sound frequency, among other differences.
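One simple way to match a captured sound against the pre-stored database is to compare feature vectors with a similarity threshold. The following sketch assumes each sound has already been reduced to a small, hypothetical fixed-length feature vector; real systems would use learned acoustic features, which the disclosure leaves unspecified:

```python
import math

# Illustrative sketch of matching a captured sound against pre-stored
# emergency sounds. The labels and 4-element "feature vectors" below are
# made-up placeholders for real acoustic features.

PRE_STORED = {
    "car_alarm":   [0.9, 0.1, 0.0, 0.2],
    "dog_bark":    [0.1, 0.8, 0.3, 0.0],
    "smoke_alarm": [0.7, 0.0, 0.6, 0.1],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_sound(features, threshold=0.9):
    """Return the best-matching pre-stored sound label, or None if no
    candidate clears the similarity threshold."""
    best_label, best_score = None, 0.0
    for label, stored in PRE_STORED.items():
        score = cosine_similarity(features, stored)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

print(match_sound([0.88, 0.12, 0.02, 0.18]))  # very close to "car_alarm"
print(match_sound([0.3, 0.3, 0.3, 0.3]))      # ambiguous, below threshold
```

The threshold keeps ambiguous captures from triggering a false alarm; in a cloud-backed embodiment, the same lookup could run against a remote database instead of the local one.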
Storage device 505 can be configured to store one or more computer programs that may be executed by processor 504 to perform functions of the electronic device 502. For example, storage device 505 can store programs configured to process sound events, communicate with remote emergency service servers, and/or process user input. Storage device 505 can be further configured to store data used by the processor 504. Storage device 505 can be a non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform one or more methods, as discussed below. The computer-readable medium can include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable media or computer-readable storage devices. The computer-readable medium can have computer instructions stored thereon, as disclosed. In some embodiments, the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.
As illustrated in the embodiment of
In some embodiments, depending on the sound event captured, the disclosed system can provide different responses. For example, if the sound event is a car alarm going off and the user is unresponsive to a prompt, the device 502 can request emergency services. If the sound event corresponds to a car crash, the device 502 can send a message to an emergency contact of the user and/or call emergency services without first prompting the user for confirmation.
In some embodiments, the user can set which types of sound events trigger a prompt, or an automatic response without a prompt, as well as what the triggered response or action should be, whether calling emergency services or contacting the user's emergency contact.
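The per-event settings described above can be modeled as a small policy table mapping each sound event to a mode (prompt first, or act automatically) and an action. This is a hypothetical sketch; the event labels, mode names, and action names are assumptions for illustration:

```python
# Hypothetical sketch of user-configurable trigger/response settings.
# Event labels, modes, and action names are illustrative assumptions.

DEFAULT_POLICY = {
    "car_alarm": {"mode": "prompt", "action": "call_emergency_services"},
    "car_crash": {"mode": "auto",   "action": "notify_emergency_contact"},
    "gunshot":   {"mode": "auto",   "action": "call_emergency_services"},
}

def configure(policy, event, mode, action):
    """Let the user choose, per sound event, whether to prompt first or act
    automatically, and which action to take."""
    if mode not in ("prompt", "auto"):
        raise ValueError("mode must be 'prompt' or 'auto'")
    policy[event] = {"mode": mode, "action": action}

def respond(policy, event, user_confirmed=None):
    """Resolve an event against the policy. For 'prompt' mode, escalate when
    the user confirms (True) or fails to answer (None); stand down on False."""
    entry = policy.get(event)
    if entry is None:
        return "ignore"
    if entry["mode"] == "auto":
        return entry["action"]
    return entry["action"] if user_confirmed in (True, None) else "stand_down"

policy = dict(DEFAULT_POLICY)
configure(policy, "dog_bark", mode="prompt", action="notify_emergency_contact")
print(respond(policy, "car_crash"))                       # auto: no prompt needed
print(respond(policy, "dog_bark", user_confirmed=False))  # user declined the prompt
```

Keeping the policy as data rather than code makes it straightforward to persist the user's choices on storage 505 or sync them from a paired mobile phone.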
In some embodiments, the system may correlate captured sound events with local information to improve its sound event recognition. For example, in the event of an earthquake, the device 502 can check public information on recent earthquakes to see if one has occurred in the close vicinity of device 502 when device 502 captures a sound event that indicates an earthquake. If verified, the system can prompt to see if the user of device 502 requires assistance or emergency services. As another example, the device 502 can check, via police channels, whether there is a report of an ongoing crime near its location when the sound of gunshots is captured. If verified, a request for emergency services can be sent automatically. In the examples provided above, the device 502 may also include a location determination component such as a GPS receiver.
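The corroboration step amounts to checking whether an external report of the same kind of event exists near the device's location. In the sketch below, `fetch_recent_reports` is a stand-in for querying real external feeds (earthquake bulletins, police channels), which the disclosure does not specify; the coordinates and radius are likewise assumptions:

```python
import math

# Illustrative sketch of corroborating a detected sound event with public
# reports near the device's GPS location. `fetch_recent_reports` is a stub
# standing in for an unspecified external data source.

def fetch_recent_reports(kind):
    # Canned data in place of a real feed query.
    return {
        "earthquake": [{"lat": 37.77, "lon": -122.42}],
        "shooting":   [],
    }.get(kind, [])

def distance_km(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for "close vicinity" checks.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def corroborate(event_kind, device_lat, device_lon, radius_km=50.0):
    """Return True if an external report of `event_kind` exists within
    `radius_km` of the device's location."""
    return any(
        distance_km(device_lat, device_lon, r["lat"], r["lon"]) <= radius_km
        for r in fetch_recent_reports(event_kind)
    )

print(corroborate("earthquake", 37.78, -122.41))  # nearby report found
print(corroborate("shooting", 37.78, -122.41))    # no reports available
```

When corroboration succeeds, the system can either prompt the user (earthquake example) or request emergency services automatically (gunshot example), per the configured policy.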
In some embodiments, the system can incorporate algorithms to help distinguish differences between different sound patterns. For example, it can recognize the sound of screaming laughter versus someone who is screaming for help and requires assistance. In some embodiments, the system can use machine learning (e.g., artificial intelligence techniques) to learn new sound events that should trigger a response from the system. The machine learning can be supervised or unsupervised. In one example, if the user declines to request emergency assistance when prompted by the system, the system can automatically disassociate the sound event from an emergency. In some embodiments, this happens only after the user has repeatedly declined assistance in response to the same triggering sound event. Once a sound event is disassociated from an emergency, the system will no longer prompt the user or initiate a request for emergency assistance when the sound event is captured again. In contrast, if the user has requested emergency assistance when prompted after a sound event is detected, the system can automatically increase a confidence score associated with the sound event. When the confidence score reaches a threshold, the system can automatically request emergency services without first prompting the user for confirmation.
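The feedback loop described above can be captured with two counters per sound event: a confidence score that grows when the user confirms an emergency, and a decline count that eventually disassociates the event. The thresholds and class name below are assumptions for illustration, not values from the disclosure:

```python
# Hypothetical sketch of feedback-driven learning over sound events.
# Threshold values and all names are illustrative assumptions.

AUTO_DISPATCH_THRESHOLD = 3   # confirmations before skipping the prompt
DECLINES_TO_DISASSOCIATE = 3  # repeated declines before the event is ignored

class SoundEventLearner:
    def __init__(self):
        self.confidence = {}  # event label -> confirmation count
        self.declines = {}    # event label -> consecutive decline count

    def record_feedback(self, event, user_requested_help):
        if user_requested_help:
            self.confidence[event] = self.confidence.get(event, 0) + 1
            self.declines[event] = 0  # a confirmation resets declines
        else:
            self.declines[event] = self.declines.get(event, 0) + 1

    def disposition(self, event):
        """How the device should treat this event the next time it is heard."""
        if self.declines.get(event, 0) >= DECLINES_TO_DISASSOCIATE:
            return "disassociated"  # no prompt, no emergency request
        if self.confidence.get(event, 0) >= AUTO_DISPATCH_THRESHOLD:
            return "auto_dispatch"  # request help without prompting
        return "prompt"

learner = SoundEventLearner()
for _ in range(3):
    learner.record_feedback("glass_breaking", user_requested_help=True)
print(learner.disposition("glass_breaking"))  # escalated past prompting
```

Requiring repeated declines before disassociation guards against a single dismissal permanently silencing a genuinely dangerous sound event.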
Referring again to
All of the methods and tasks described herein may be performed and fully automated by a computer system. The computer system may, in some cases, include multiple distinct computers or computing devices (e.g., physical servers, workstations, storage arrays, cloud computing resources, and mobile devices, etc.) that communicate and interoperate over a network to perform the described functions. Each such computing device typically includes a processor (or multiple processors) that executes program instructions or modules stored in a memory or other non-transitory computer-readable storage medium or device (e.g., solid state storage devices, disk drives, etc.). The various functions disclosed herein may be embodied in such program instructions or may be implemented in application-specific circuitry (e.g., ASICs or FPGAs) of the computer system. Where the computer system includes multiple computing devices, these devices may, but need not, be co-located. The results of the disclosed methods and tasks may be persistently stored by transforming physical storage devices, such as solid-state memory chips or magnetic disks, into a different state. In some embodiments, the computer system may be a cloud-based computing system whose processing resources are shared by multiple distinct business entities or other users.
Depending on the embodiment, certain acts, events, or functions of any of the processes or algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described operations or events are necessary for the practice of the algorithm). Moreover, in certain embodiments, operations or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
Although embodiments of this disclosure have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of embodiments of this disclosure as defined by the appended claims.
This application is a continuation-in-part of, and claims the priority of, U.S. patent application Ser. No. 17/477,819, filed on Sep. 17, 2021, which claims priority to U.S. Provisional Application No. 63/080,954, filed Sep. 21, 2020, the entirety of each of which is hereby incorporated by reference.
| Number | Date | Country |
| --- | --- | --- |
| 63080954 | Sep 2020 | US |
| Relation | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 17477819 | Sep 2021 | US |
| Child | 17950965 | | US |