Hearing loss, which may be due to many different causes, is generally of two types: conductive and sensorineural. Sensorineural hearing loss is due to the absence or destruction of the hair cells in the cochlea that transduce sound signals into nerve impulses. Various hearing prostheses are commercially available to provide individuals suffering from sensorineural hearing loss with the ability to perceive sound. One example of a hearing prosthesis is a cochlear implant.
Conductive hearing loss occurs when the normal mechanical pathways that provide sound to hair cells in the cochlea are impeded, for example, by damage to the ossicular chain or the ear canal. Individuals suffering from conductive hearing loss may retain some form of residual hearing because the hair cells in the cochlea may remain undamaged.
Individuals suffering from hearing loss typically receive an acoustic hearing aid. Conventional hearing aids rely on principles of air conduction to transmit acoustic signals to the cochlea. In particular, a hearing aid typically uses an arrangement positioned in the recipient's ear canal or on the outer ear to amplify a sound received by the outer ear of the recipient. This amplified sound reaches the cochlea, causing motion of the perilymph and stimulation of the auditory nerve. Cases of conductive hearing loss typically are treated by means of bone conduction hearing aids. In contrast to conventional hearing aids, these devices use a mechanical actuator that is coupled to the skull bone to apply the amplified sound.
In contrast to hearing aids, which rely primarily on the principles of air conduction, certain types of hearing prostheses commonly referred to as cochlear implants convert a received sound into electrical stimulation. The electrical stimulation is applied to the cochlea, which results in the perception of the received sound.
In accordance with an exemplary embodiment, there is a system, comprising: a central processor apparatus configured to receive input from a plurality of sound capture devices, wherein the central processor apparatus is configured to collectively evaluate the input from the plurality of sound capture devices to identify at least one spatial location that is more conducive to hearing with a hearing prosthesis relative to another spatial location.
In accordance with another exemplary embodiment, there is a method, comprising: simultaneously capturing sound at a plurality of respective globally spatially separated locations utilizing respectively located separate sound capture devices; evaluating the captured sound; and developing one or more acoustic landmarks based on the captured sound.
In accordance with another exemplary embodiment, there is a method, comprising: capturing sound at a plurality of respectively effectively spatially separated locations of a locality; evaluating the captured sound; and developing a sound field of the locality.
In accordance with another exemplary embodiment, there is a method comprising: receiving data indicative of sound captured at a plurality of spatially separated locations in an enclosed environment, wherein the enclosed environment has an acoustic environment such that a given sound has different properties at the different locations owing to the acoustic environment; and evaluating the data to determine at least one spatially linked acoustic related data point based on one or more hearing related features of a specific hearing impaired individual.
Embodiments are described below with reference to the attached drawings, in which:
In view of the above, it is to be understood that at least some embodiments detailed herein and/or variations thereof are directed towards a body-worn sensory supplement medical device (e.g., the hearing prosthesis of
The recipient has an outer ear 101, a middle ear 105, and an inner ear 107. Components of outer ear 101, middle ear 105, and inner ear 107 are described below, followed by a description of cochlear implant 100.
In a fully functional ear, outer ear 101 comprises an auricle 110 and an ear canal 102. An acoustic pressure or sound wave 103 is collected by auricle 110 and channeled into and through ear canal 102. Disposed across the distal end of ear canal 102 is a tympanic membrane 104, which vibrates in response to sound wave 103. This vibration is coupled to oval window or fenestra ovalis 112 through three bones of middle ear 105, collectively referred to as the ossicles 106 and comprising the malleus 108, the incus 109, and the stapes 111. Bones 108, 109, and 111 of middle ear 105 serve to filter and amplify sound wave 103, causing oval window 112 to articulate, or vibrate, in response to vibration of tympanic membrane 104. This vibration sets up waves of fluid motion of the perilymph within cochlea 140. Such fluid motion, in turn, activates tiny hair cells (not shown) inside of cochlea 140. Activation of the hair cells causes appropriate nerve impulses to be generated and transferred through the spiral ganglion cells (not shown) and auditory nerve 114 to the brain (also not shown), where they are perceived as sound.
As shown, cochlear implant 100 comprises one or more components which are temporarily or permanently implanted in the recipient. Cochlear implant 100 is shown in
In the illustrative arrangement of
Cochlear implant 100 comprises an internal energy transfer assembly 132 which can be positioned in a recess of the temporal bone adjacent auricle 110 of the recipient. As detailed below, internal energy transfer assembly 132 is a component of the transcutaneous energy transfer link and receives power and/or data from external device 142. In the illustrative embodiment, the energy transfer link comprises an inductive RF link, and internal energy transfer assembly 132 comprises a primary internal coil 136. Internal coil 136 is typically a wire antenna coil comprised of multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire.
Cochlear implant 100 further comprises a main implantable component 120 and an elongate electrode assembly 118. In some embodiments, internal energy transfer assembly 132 and main implantable component 120 are hermetically sealed within a biocompatible housing. In some embodiments, main implantable component 120 includes an implantable microphone assembly (not shown) and a sound processing unit (not shown) to convert the sound signals received by the implantable microphone in internal energy transfer assembly 132 to data signals. That said, in some alternative embodiments, the implantable microphone assembly can be located in a separate implantable component (e.g., that has its own housing assembly, etc.) that is in signal communication with the main implantable component 120 (e.g., via leads or the like between the separate implantable component and the main implantable component 120). In at least some embodiments, the teachings detailed herein and/or variations thereof can be utilized with any type of implantable microphone arrangement.
Main implantable component 120 further includes a stimulator unit (also not shown) which generates electrical stimulation signals based on the data signals. The electrical stimulation signals are delivered to the recipient via elongate electrode assembly 118.
Elongate electrode assembly 118 has a proximal end connected to main implantable component 120, and a distal end implanted in cochlea 140. Electrode assembly 118 extends from main implantable component 120 to cochlea 140 through mastoid bone 119. In some embodiments, electrode assembly 118 may be implanted at least in basal region 116, and sometimes further. For example, electrode assembly 118 may extend towards the apical end of cochlea 140, referred to as cochlea apex 134. In certain circumstances, electrode assembly 118 may be inserted into cochlea 140 via a cochleostomy 122. In other circumstances, a cochleostomy may be formed through round window 121, oval window 112, the promontory 123 or through an apical turn 147 of cochlea 140.
Electrode assembly 118 comprises a longitudinally aligned and distally extending array 146 of electrodes 148, disposed along a length thereof. As noted, a stimulator unit generates stimulation signals which are applied by electrodes 148 to cochlea 140, thereby stimulating auditory nerve 114.
It is noted that while the embodiments detailed herein will be often described in terms of utilization of a cochlear implant, alternative embodiments can be utilized in other types of hearing prostheses, such as by way of example only and not by way of limitation, bone conduction devices (percutaneous, active transcutaneous and/or passive transcutaneous), Direct Acoustic Cochlear Implants (DACI), and conventional hearing aids. Accordingly, any disclosure herein with regard to one of these types of hearing prostheses corresponds to a disclosure of another of these types of hearing prostheses or any other prosthetic medical device for that matter, unless otherwise specified, or unless the disclosure thereof is incompatible with a given hearing prosthesis based on the current state of technology.
In view of the above, it is to be understood that in an exemplary embodiment, there is a system, comprising a central processor apparatus configured to receive input from a plurality of sound capture devices, such as, for example, the smartphones 240 and/or the microphones 440 detailed above, and/or from microphones or other sound capture devices of a hearing prosthesis and/or someone else's hearing prosthesis (in an exemplary embodiment, one or more of the sound capture devices are respective sound capture devices of hearing prostheses of people in the area, where the hearing prostheses are in signal communication with the central processor, directly or indirectly, such as, with respect to the latter, through a smart phone or a cell phone, etc.; such an embodiment can also enable a dynamic system where the microphones move around from location to location, which can also be the case with, for example, the smart phones). As noted above, the input can be the raw signal and/or a modified signal (e.g., amplified and/or with some features taken out and/or with compression techniques applied thereto) from the microphones of the sound capture devices. Thus, in an exemplary embodiment, there is a system that includes microphones that are configured to output respective signals indicative of respective captured sounds. The system is further configured to provide the respective signals and/or modified signals based on the respective signals to the central processor apparatus as input from the plurality of sound capture devices. Conversely, in some embodiments, the input can be a signal that is based on the sound captured by the microphones, but is a data signal that results from the processing or otherwise the evaluations performed by the sound capture devices, which data signal is provided to the central processor apparatus 3401. In this exemplary embodiment, the central processor apparatus is configured to collectively evaluate the input from the plurality of sound capture devices.
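By way of illustration only, the following minimal Python sketch shows one way the two input variants just described (a raw/modified signal, or a data signal carrying device-side evaluation results) might be represented on arrival at the central processor apparatus; the message format and all names are hypothetical assumptions, not a definitive implementation of any embodiment herein.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class CaptureInput:
    """One input message from a sound capture device (hypothetical format).

    Exactly one of `samples` (the raw or amplified/compressed signal) or
    `features` (a data signal already computed on the device) is expected.
    """
    device_id: str                                  # e.g., a smartphone or prosthesis ID
    location: Optional[Tuple[float, float]] = None  # (x, y) in meters, if known
    samples: Optional[List[float]] = None           # raw/modified-signal variant
    features: Optional[Dict[str, float]] = None     # processed-data variant

def normalize(msg: CaptureInput) -> Dict[str, float]:
    """Reduce either input variant to features the central processor can evaluate collectively."""
    if msg.features is not None:                    # device already did the evaluation
        return msg.features
    # Raw-signal variant: compute a simple level estimate here (RMS).
    n = len(msg.samples)
    rms = (sum(s * s for s in msg.samples) / n) ** 0.5
    return {"rms_level": rms}
```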
In an exemplary embodiment, the processor apparatus includes a processor, which processor of the processor apparatus can be a standard microprocessor supported by software or firmware or the like that is programmed to evaluate the signal received from the sound capture device(s). By way of example only and not by way of limitation, in an exemplary embodiment, the microprocessor can have access to lookup tables or the like having data associated with spectral analysis of a given sound signal, by way of example, and can extract features of the input signal and compare those features to features in the lookup table, and, via related data in the lookup table associated with those features, make a determination about the input signal, and thus make a determination related to the sound and/or classifying the sound. In an exemplary embodiment, the processor is a processor of a sound analyzer. The sound analyzer can be FFT based or based on another principle of operation. The sound analyzer can be a standard sound analyzer or audio analyzer, such as one available on smart phones or the like. The processor can be part of a sound wave analyzer. Moreover, it is specifically noted that while the figures above present the processor apparatus 3401, and thus the processor thereof, as a device that is remote from the hearing prosthesis and/or the smart phones, etc., the processor can instead be part of the hearing prosthesis or of a portable electronics device (e.g., a smart phone, or any other device that can have utilitarian value with respect to implementing the teachings detailed herein). Still, consistent with the teachings above, it is noted that in some exemplary embodiments, the processor can be remote from the prosthesis and the smart phones or other portable consumer electronic devices.
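By way of illustration only, a minimal sketch of the lookup-table comparison just described; the spectral features chosen, the table entries, and the classification labels are all hypothetical:

```python
import numpy as np

# Hypothetical lookup table: coarse spectral features -> sound classification.
# Entries are checked in order; the first match wins.
LOOKUP = [
    # (min_centroid_hz, max_centroid_hz, min_flatness, max_flatness, label)
    (0,     500, 0.0, 0.3, "machinery hum"),
    (300,  3000, 0.0, 0.4, "speech-like"),
    (0,   20000, 0.6, 1.0, "broadband noise"),
]

def classify(signal: np.ndarray, fs: float) -> str:
    """Extract spectral features of the input signal and compare them to the table."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    # Spectral centroid: the "center of mass" of the spectrum, in Hz.
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    # Spectral flatness: geometric / arithmetic mean; near 1.0 for noise.
    flatness = float(np.exp(np.mean(np.log(spectrum + 1e-12))) / (np.mean(spectrum) + 1e-12))
    for lo, hi, f_lo, f_hi, label in LOOKUP:
        if lo <= centroid <= hi and f_lo <= flatness <= f_hi:
            return label
    return "unclassified"
```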
By way of example only and not by way of limitation, in an exemplary embodiment, any one or more of the devices of systems detailed herein can be in signal communication via Bluetooth technology or other RF signal communication systems with each other and/or with a remote server that is linked, via, for example, the Internet or the like, to a remote processor. Indeed, in at least some exemplary embodiments, the processor apparatus 3401 is a device that is entirely remote from the other components of the system. That said, in an exemplary embodiment, the processor apparatus 3401 is a device that has components that are spatially located at different locations in a global manner, which components can be in signal communication with each other via the Internet or the like. In an exemplary embodiment, the signals received from the sound capture devices can be provided via the Internet to this remote processor, whereupon the signal is analyzed, and then, via the Internet, the signal indicative of an instruction related to data related to a recipient of the hearing prostheses can be provided to the device at issue, such that the device can output such. Note also that in an exemplary embodiment, the information received from the remote processor can simply be the results of the analysis, whereupon the processor can analyze the results of the analysis, and identify information that will then be outputted as will be described in greater detail below. It is noted that the term “processor” as utilized herein, can correspond to a plurality of processors linked together, as well as one single processor.
In an exemplary embodiment, the system includes a sound analyzer in general, and, in some embodiments, a speech analyzer in particular, such as by way of example only and not by way of limitation, one that is configured to perform spectrographic measurements and/or spectral analysis measurements and/or duration measurements and/or fundamental frequency measurements. By way of example only and not by way of limitation, such can correspond to a processor of a computer that is configured to execute the SIL Language Technology Speech Analyzer™ program. In this regard, the program can be loaded onto memory of the system, and the processor can be configured to access the program to analyze or otherwise evaluate the speech. In an alternate embodiment, the speech analyzer can be that available from Rose Medical, which programming can be loaded onto the memory of the system.
In an exemplary embodiment, the central processing assembly can include an audio analyzer, which can analyze one or more of the following parameters: harmonic, noise, gain, level, intermodulation distortion, frequency response, relative phase of signals, etc. It is noted that the above-noted sound analyzers and/or speech analyzers can also analyze one or more of the aforementioned parameters. In some embodiments, the audio analyzer is configured to develop time domain information, identifying instantaneous amplitude as a function of time. In some embodiments, the audio analyzer is configured to measure intermodulation distortion and/or phase. In an exemplary embodiment, the audio analyzer is configured to measure signal-to-noise ratio and/or total harmonic distortion plus noise.
To be clear, in some exemplary embodiments, the central processor apparatus can include a processor that is configured to access software, firmware and/or hardware that is “programmed” or otherwise configured to execute one or more of the aforementioned analyses. By way of example only and not by way of limitation, the central processor apparatus can include hardware in the form of circuits that are configured to enable the analysis detailed above and/or below, the output of such circuitry being received by the processor so that the processor can utilize that output to execute the teachings detailed herein. In some embodiments, the processor apparatus utilizes analog circuits and/or digital signal processing and/or FFT. In an exemplary embodiment, the analyzer engine is configured to provide high precision implementations of AC/DC voltmeter values (peak and RMS); the analyzer engine includes high-pass and/or low-pass and/or weighting filters, and can include bandpass and/or notch filters and/or frequency counters, all of which are arranged to perform an analysis on the incoming signal so as to evaluate that signal and identify certain characteristics thereof, which characteristics are correlated to predetermined scenarios or otherwise predetermined instructions and/or predetermined indications as will be described in greater detail below. It is also noted that in systems that are digitally based, the central processor apparatus is configured to implement signal analysis utilizing FFT based calculations, and in this regard, the processor is configured to execute FFT based calculations.
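By way of illustration only, a minimal sketch of FFT-based measurements of the kind named above (here, signal-to-noise ratio and THD+N), under the simplifying assumption that the tone frequency of interest is known:

```python
import numpy as np

def snr_and_thdn_db(x: np.ndarray, fs: float, f0: float):
    """Estimate SNR and THD+N of a captured tone at frequency f0 via FFT binning.

    Simplification: everything outside the fundamental's bins is treated as
    harmonics-plus-noise (strictly speaking this yields a SINAD-like figure).
    """
    win = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * win)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k0 = int(np.argmin(np.abs(freqs - f0)))        # fundamental bin
    band = slice(max(k0 - 2, 0), k0 + 3)           # +/- 2 bins of window leakage
    p_signal = spec[band].sum()
    p_rest = spec.sum() - p_signal                 # harmonics + noise power
    snr_db = 10 * np.log10(p_signal / (p_rest + 1e-20))
    thdn_db = 10 * np.log10((p_rest + 1e-20) / (p_signal + 1e-20))
    return snr_db, thdn_db
```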
In an exemplary embodiment, the central processor apparatus is a fixture of a given building (environmental structure). Alternatively and/or in addition to this, the central processor apparatus is a standalone portable device that is located in a case or the like that can be brought to a given location. In an exemplary embodiment, the central processor apparatus can be a personal computer, such as a laptop computer, that includes USB port inputs and/or outputs and/or RF receivers and/or transmitters and is programmed as such (e.g., the computer can have Bluetooth capabilities and/or mobile cellular phone capabilities, etc.). In an exemplary embodiment, the central processor apparatus is configured to receive input and/or provide output utilizing the aforementioned features or any other features.
Returning to the embodiment of
In an exemplary embodiment, microphones 44X are in wired and/or wireless communication with the central processor apparatus, such as in some embodiments where the central processor apparatus is co-located globally with the microphones.
The above-noted ability to collectively evaluate the input from the various sound capture devices and identify at least one spatial location that is more conducive to hearing with the hearing prosthesis relative to another spatial location can have utilitarian value in a scenario, such as an exemplary scenario according to an exemplary embodiment, where the acoustic environment of a given location (e.g., an auditorium, a theater, a classroom, a movie theater) changes dynamically (e.g., because more people have entered the given structure, because people have left the given structure, because furniture has been moved, because the sources of sound have been moved, etc.). This is opposed to an exemplary scenario where the acoustic environment is effectively static. In an exemplary embodiment, hearing with a hearing prosthesis, such as by way of example only and not by way of limitation, hearing utilizing a cochlear implant, will be different for the recipient vis-à-vis the sensorineural process that occurs that results in the evocation of a hearing percept utilizing the cochlear implant, than what many recipients had previously experienced. Indeed, in an exemplary embodiment, this is the case with respect to a recipient that had previously had natural hearing and/or utilized conventional hearing aids prior to obtaining his or her cochlear implant. In some embodiments of the teachings detailed herein, such can alleviate or otherwise mitigate, if only partially, the presence of an unnoticeable noise source, the presence or location of objects (e.g., walls, windows, doors, etc.), and/or even the structure of an object (e.g., a corner) that might affect the hearing perception of a recipient of the hearing prostheses in a manner that is less than utilitarian. In an exemplary embodiment, the teachings detailed herein can be utilized in conjunction with noise cancellation and/or suppression systems of the hearing prosthesis, and thus can supplement such. In at least some exemplary embodiments, the teachings detailed herein can be utilized to improve hearing performance in an environment by identifying a location and/or a plurality of locations which is more conducive to hearing with the hearing prosthesis relative to other locations. By way of example only and not by way of limitation, the teachings detailed herein can be utilized to locate a location and/or a plurality of locations which have relatively less noise and/or reverberation interference with respect to other locations. Moreover, as will be detailed below, in some exemplary embodiments, the teachings detailed herein include devices, systems, and methods that evaluate a given sound environment and determine a given location that has more utility with respect to hearing with the prosthesis relative to other locations based on not only the input from the various sound capture devices, but also based on the recipient's hearing profile. In an exemplary embodiment, the teachings detailed herein provide a device, system, and method that identify location(s) where the recipient can have maximum comfort with respect to utilizing his or her hearing prostheses and/or will experience maximum audibility using the hearing prostheses.
It is noted that while the embodiments detailed herein have focused on about 6 or fewer sound capture devices/microphones, in an exemplary embodiment, the teachings detailed herein can be executed utilizing 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, or 40, or 50 or 60 or 70 or 80 or 90 or 100 microphones or more (or any value or range of values therebetween in increments of 1), which microphones can be utilized to sample or otherwise capture an audio environment all simultaneously or some of them simultaneously, such as utilizing F number of microphones simultaneously from a pool of H number of microphones, where F and H can be any number of 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, or 40, or 50 or 60 or 70 or 80 or 90 or 100 (or any number therein, in increments of 1), provided that H is greater than F by at least 1. In an exemplary embodiment, some of the microphones can be statically located in the sound environment during the entire period of sampling, while others can move around or otherwise be moved around. Indeed, in an exemplary embodiment, one subset of microphones remains static during the sampling while other microphones are moved around during the sampling.
It is noted that in at least some exemplary embodiments, sampling can be executed once every or at least 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, or 40, or 50 or 60 or 70 or 80 or 90 or 100 (or any number therein in increments of 1) seconds, minutes or hours and/or that number of times during a given sound event, and in some other embodiments, sound capture can occur continuously for, or for at least, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, or 40, or 50 or 60 or 70 or 80 or 90 or 100 (or any number therein in increments of 1) seconds or minutes or potentially even hours. In some embodiments, the aforementioned sound capture is executed utilizing microphones that remain in place and are not moved during the aforementioned temporal periods of time. In an exemplary embodiment, every time a sampling is executed, one or more or all of the method actions detailed herein can be executed based thereon. That said, in an exemplary embodiment, the sampling can be utilized as an overall sample and otherwise statistically managed (e.g., averaged), and the statistically managed results can be utilized in the methods herein.
In at least some exemplary embodiments, none of the microphones are moved during the period of time that one or more or all of the methods detailed herein are executed. In an exemplary embodiment, more than 90, 80, 70, 60, or 50% of the microphones remain static and are not moved during the course of the execution of the methods herein. Indeed, in an exemplary embodiment, such is concomitant with the concept of capturing sound at the exact same time from a number of different locations that are known. To be clear, in at least some exemplary embodiments, the methods detailed herein are executed without someone moving a microphone from one location to another, at least not in a meaningful way (e.g., the smart phones may be moved a few inches or even a foot or two, but such is not a change to any local position with respect to the global environment). The teachings detailed herein can be utilized to establish a sound field in real-time, or close thereto, by harnessing signals from multiple microphones in a given sound environment. The embodiments herein can provide the ability to establish a true sound field, as opposed to merely identifying the audio state at a single point at a given instant. In this regard, the teachings detailed herein can be utilized to provide advice to a given recipient as to where he or she should go in the enclosed volume, as opposed to whether or not a given location is simply good or bad.
Consistent with the teachings detailed herein, owing to the ability to repeatedly sample an acoustic environment from static locations that remain constant, such as the ability to do so according to the aforementioned temporal periods and/or according to the number of times in the aforementioned temporal periods, the devices, systems, and/or methods herein can thus address and otherwise deal with a rapid change in an audio signal and/or with respect to an audio level at one or more locations.
In an exemplary embodiment, methods, devices, and systems detailed herein can include continuously sampling an audio environment. By way of example only and not by way of limitation, in an exemplary embodiment, the audio environment can be sampled utilizing a plurality of microphones, where each microphone captures sound at effectively the exact same time, and thus the samples occur effectively at the exact same time.
It is noted that the teachings detailed herein are applicable to sound environments that have a significant time dynamic. In exemplary embodiments, the teachings detailed herein are directed to periods of time that are not small, but instead, are significant, as will be described in greater detail below.
In an exemplary embodiment, the central processor apparatus is configured to receive input pertaining to a particular feature of a given hearing prosthesis. By way of example only and not by way of limitation, such as in the exemplary embodiment where the central processor apparatus is a laptop computer, the keyboard can be utilized by a recipient to input such input. Alternatively, and/or in addition to this, a graphical user interface can be utilized in combination with a mouse or the like and/or a touchscreen system so as to input the input pertaining to the particular feature of the given hearing prostheses. In an exemplary embodiment, the central processor apparatus is also configured to collectively evaluate the input from the plurality of sound capture devices and the input pertaining to the particular feature of the given hearing prosthesis to identify the at least one spatial location that is more conducive to hearing with the particular hearing prosthesis relative to another spatial location. In this regard, by way of example only and not by way of limitation, in an exemplary embodiment, the input pertaining to a particular feature of a given hearing prosthesis can be the current gain setting of the hearing prosthesis, or otherwise the gain setting the recipient intends to utilize during the hearing event. In an exemplary embodiment, upon receiving this input, the central processor apparatus utilizes, by way of example only and not by way of limitation, a lookup table that includes in one section data relating to the particular feature of the given hearing prosthesis, and in a correlated section, data associated therewith that is utilized in conjunction with the inputs from the plurality of sound capture devices, utilizing an algorithm, such as an if-else algorithm, that identifies at least one spatial location that is more conducive to hearing with the particular hearing prosthesis relative to one or more other spatial locations.
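By way of illustration only, a minimal sketch of the lookup-table/if-else approach just described; the gain settings, tolerance values, and location labels are hypothetical:

```python
# Hypothetical lookup table: prosthesis gain setting -> tolerable
# background-noise level (dB SPL) before hearing is assumed to suffer.
GAIN_TO_NOISE_TOLERANCE = {"low": 45.0, "medium": 55.0, "high": 62.0}

def rank_locations(gain_setting: str, noise_by_location: dict) -> list:
    """if/else-style identification of spatial locations more conducive to hearing.

    noise_by_location maps a location label to its measured noise level,
    as derived from the inputs from the sound capture devices.
    """
    tolerance = GAIN_TO_NOISE_TOLERANCE[gain_setting]
    conducive, marginal = [], []
    for loc, noise_db in sorted(noise_by_location.items(), key=lambda kv: kv[1]):
        if noise_db <= tolerance:
            conducive.append(loc)   # within tolerance: quietest first
        else:
            marginal.append(loc)
    return conducive + marginal     # best candidates first

# Example: yields ['seat A', 'seat C', 'seat B'] for the "medium" setting.
print(rank_locations("medium", {"seat A": 48.0, "seat B": 60.0, "seat C": 52.0}))
```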
In an exemplary embodiment, the spatial location that is identified can be specific to an identifiable location. By way of example only and not by way of limitation, with respect to the embodiment of
Consistent with the teachings above, as will be understood, in an exemplary embodiment, the system can further include a plurality of microphones spatially located apart from one another. In an exemplary embodiment, one or more or all of the microphones are located less than, more than or about equal to X meters apart from one another, where, in some embodiments, X is 0.5, 0.75, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 45, 50, 55, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 175, 200, or more, or any value or range of values therebetween in 0.01 increments (e.g., 4.44, 45.59, 33.33 to 36.77, etc.).
In an exemplary embodiment, consistent with the teachings above, the microphones are configured to output respective signals indicative of respective captured sounds. The system is further configured to provide the respective signals and/or modified signals based on the respective signals to the central processor apparatus as input from the plurality of sound capture devices.
Consistent with the teachings above, such as system 310 of
In an exemplary embodiment, the cellular systems of the cellular phones 240 can be utilized to pinpoint or otherwise determine the relative location and/or the actual locations of the given cell phones, and thus can determine the relative locations and/or actual locations of the given microphones of the system. Such can have utilitarian value with respect to embodiments where the people who own or otherwise possess the respective cell phones will move around or otherwise not be in a static position or otherwise will not be located in a predetermined location. That said, in some exemplary embodiments, there will be a seating regime or the like (e.g., assigned seating at a theater, assigned seating in a classroom, etc.), and thus the system can be configured to correlate the identification of a given sound capture device with a given location that is or should be associated with that sound capture device (e.g., in an exemplary embodiment, the input that is received from the various sound capture devices includes identification tags or the like or some other marker that enables the central processor apparatus to correlate, such as by utilizing a lookup table that is programmed or otherwise present in the memory of the central processor apparatus, a given input with a given person and/or a given location—for example, if the input is from John A's cell phone, and it is noted that John A is sitting at a given location, that can be utilized to determine the spatial location of the sound capture device—for example, if the input includes a carrier or the like that indicates coordinates of the cell phone obtained via triangulation of cell phone towers, etc., that can be the way that the system determines the location of the respective sound capture device that provided the given input).
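By way of illustration only, a minimal sketch of correlating a given input with a given location via an identification tag, with a fall-back to coordinates carried with the input; the seating chart and message format are hypothetical:

```python
# Hypothetical seating chart held in the central processor's memory.
SEATING_CHART = {
    "phone-john-a": ("row 4", "seat 12"),
    "phone-jane-b": ("row 7", "seat 3"),
}

def locate_capture_device(input_msg: dict):
    """Correlate a given input with a given location via its identification tag,
    falling back to coordinates carried with the input (e.g., obtained via
    triangulation of cell phone towers), if present."""
    tag = input_msg.get("device_id")
    if tag in SEATING_CHART:
        return SEATING_CHART[tag]        # assigned-seating case
    return input_msg.get("coordinates")  # moving-device case (may be None)

# Example usage with both variants of input message.
print(locate_capture_device({"device_id": "phone-john-a"}))
print(locate_capture_device({"device_id": "unknown", "coordinates": (12.5, 3.0)}))
```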
In an exemplary embodiment, the embodiment of
Still further, in at least some exemplary embodiments, the sound capture devices can be the microphones of the hearing prostheses of given persons, where correlations can be made between the inputs therefrom according to the teachings herein and/or other methods of determining location. Again, as noted above, the sounds captured can be from the microphones of the hearing prostheses, and in some embodiments, a reverse telecoil system can be used to provide the sound captured to the system. That said, in some embodiments, the hearing prostheses can be configured to evaluate the sound and provide evaluation data based on the sound so that the system can operate based on the evaluation. For example, as with the smart phones, etc., the hearing prosthesis can include and be configured to run any of the programs for analyzing sound detailed herein or variations thereof, to extract information from the sound. Indeed, in an exemplary embodiment, the sound processors of the prostheses without modification are configured to do this (e.g., via their beamforming and/or noise cancellation routines), and the prostheses are configured to output data from the sound processor that otherwise would not be outputted, which data is indicative of features of the sound.
It is noted that while in some embodiments, the teachings herein can be applied generically to all different types of hearing prostheses, in other embodiments, the teachings detailed herein are specific to a given hearing prosthesis. In general, in at least some exemplary embodiments, the determination of location(s) by the system can be based on the specific type of hearing prosthesis that is being utilized by a given recipient. By way of example only and not by way of limitation, in some exemplary embodiments, the system is configured to identify a utilitarian location that is more utilitarian for cochlear implant users than for conventional hearing aid users and/or for bone conduction device users, and/or in some embodiments, the system is configured to identify a utilitarian location that is more utilitarian for a hearing prosthesis user that is not a cochlear implant user, such as, by way of example only and not by way of limitation, a conventional hearing aid user and/or a bone conduction device user.
Accordingly, in an exemplary embodiment, the hearing prosthesis that is the subject of the above system is a cochlear implant, and the system is configured to collectively evaluate the input from the plurality of sound capture devices to identify at least one spatial location that is more conducive to hearing with the cochlear implant relative to another spatial location and relative to that which would be the case for another type of hearing prosthesis. In an exemplary embodiment, the system can utilize a lookup table or the like that is programmed into memory, which lookup table has data points in one section respectively associated with various hearing prostheses, such as the hearing prostheses at issue, and has another section correlated to various weighting factors or the like to weight the results of the analysis of the various signals received from the microphones so as to identify the given location that has utilitarian value.
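By way of illustration only, a minimal sketch of applying prosthesis-specific weighting factors from such a lookup table to the analysis results for a candidate location; the prosthesis types and weight values are hypothetical, not empirical:

```python
# Hypothetical weighting factors per prosthesis type, applied to the analyzed
# impairments at each candidate location. The values are illustrative only;
# here a cochlear implant is weighted as more sensitive to reverberation.
WEIGHTS = {
    "cochlear implant": {"noise": 1.0, "reverberation": 2.0},
    "hearing aid":      {"noise": 1.5, "reverberation": 1.0},
    "bone conduction":  {"noise": 1.2, "reverberation": 1.2},
}

def location_penalty(prosthesis: str, noise: float, reverberation: float) -> float:
    """Weight the analysis results for one location; lower = more conducive."""
    w = WEIGHTS[prosthesis]
    return w["noise"] * noise + w["reverberation"] * reverberation

# The same measurements can rank differently for different prosthesis types.
print(location_penalty("cochlear implant", noise=0.4, reverberation=0.8))
print(location_penalty("hearing aid",      noise=0.4, reverberation=0.8))
```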
In an exemplary embodiment, the system is configured to receive input indicative of a specific recipient of the hearing prosthesis' hearing profile. This can include features that are associated with the hearing prosthesis and/or can be completely independent of the hearing prostheses. In this exemplary embodiment, the central processor apparatus is configured to collectively evaluate the input from the plurality of sound capture devices and the input indicative of the specific recipient to identify the at least one spatial location that is more conducive to hearing with the particular hearing prosthesis relative to another spatial location.
It is noted that while the embodiments detailed herein depict two-way links between the various components, in some embodiments, the link is only a one way link. By way of example only and not by way of limitation, in an exemplary embodiment, the central processor apparatus can only receive input from the smart phones, but cannot output such input thereto.
It is noted that while the embodiments of
In view of the above, it is understood that in an exemplary embodiment, there is a system that is configured to locate an optimal hearing spot/point/location/area for the recipient. In an exemplary embodiment, this is the optimal hearing spot/point/location/area, and in other embodiments, it is one of a plurality of such. In this embodiment, sound capture devices, such as microphones, are located in an environment and form a network in which the sound capture devices receive and, in some embodiments, analyze the surrounding (local) acoustic signal, which enables the relative location of a source (high/low level, intensive/less intensive, etc.) of noise signals or other signals of interest to be determined. The system is configured to analyze the microphone signals that are received or otherwise derived from the various devices, and use this information to form a one-dimensional, two-dimensional and/or three-dimensional sound field of the environment in which the sound capture devices are located. This can be done by knowing the location of each microphone in the network, and then analyzing the gains and/or phases of the various components in the output of the sound capture devices (the audio content that is captured). This is done, in an exemplary embodiment, in real-time, while in other embodiments, it is not done in real time. In an exemplary embodiment, the system is configured to receive a recipient's hearing profile as part of the criteria for locating and deciding whether the selected acoustic spot/zone would be utilitarian (e.g., ideal) for a given particular individual.
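By way of illustration only, a minimal sketch of forming a two-dimensional sound field from known microphone locations and per-microphone levels; inverse-distance weighting of decibel levels is a simplification, and the square room geometry is an assumption:

```python
import numpy as np

def sound_field_2d(mic_xy, mic_level_db, grid_n=50, extent=10.0):
    """Form a coarse two-dimensional sound field from known microphone
    positions and per-microphone levels via inverse-distance weighting.

    mic_xy: (M, 2) microphone positions in meters; mic_level_db: (M,) levels.
    Returns a (grid_n, grid_n) field over a square extent-by-extent room.
    """
    mic_xy = np.asarray(mic_xy, dtype=float)
    levels = np.asarray(mic_level_db, dtype=float)
    xs = np.linspace(0.0, extent, grid_n)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)   # all grid points
    d = np.linalg.norm(pts[:, None, :] - mic_xy[None, :, :], axis=2)
    w = 1.0 / (d + 1e-3) ** 2                          # inverse-distance weights
    field = (w * levels[None, :]).sum(axis=1) / w.sum(axis=1)
    return field.reshape(grid_n, grid_n)

# The quietest grid cell is then a candidate optimal hearing spot:
field = sound_field_2d([(1.0, 1.0), (9.0, 1.0), (5.0, 9.0)], [62.0, 55.0, 48.0])
print(np.unravel_index(np.argmin(field), field.shape))
```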
In at least some embodiments, the system is configured to take into account the presence of the objects located in the environment, based on the analyzed relative acoustic signals, and can display or otherwise provide the overall acoustic landscape/sound-field of the environment. In an exemplary embodiment, this is done by providing such directly and individually to the recipient of the prosthesis, such as, by way of example only and not by way of limitation, via Google Glasses and/or the smart phone display, etc. In an exemplary embodiment, this can have utilitarian value with respect to providing this information discreetly to the recipient of the prosthesis. Any device, system, and/or method that will enable the action of providing information to the recipient, whether such is tailored specifically to the recipient or is general to someone who utilizes a hearing prosthesis, can be utilized in at least some embodiments. Indeed, in an exemplary embodiment, a display is provided at an entrance or the like to an auditorium, which display indicates areas that have utilitarian value with respect to providing a better hearing experience for a given recipient and/or for a general recipient of a hearing prosthesis relative to other areas. Still, consistent with the embodiment that utilizes the smart phone or the like (as represented by the two-way link), the system can provide an interactive communication with the recipient indicating the location that has the better and/or best acoustic environment, which, in some embodiments, is matched to the individual's hearing profile and/or specific needs.
In an exemplary scenario, where a plurality of microphones are present in a given environment, an acoustic landscape of a theater and/or a concert hall, sports arena, church, auditorium, etc., can be analyzed. The respective microphones of the respective sound capture devices can, for example, be utilized to obtain information indicative of the approximate level of noise at the location thereof. In an exemplary embodiment, this is done by simply capturing sound and then streaming the sound and/or a modified version of the signal thereof to the central processing assembly. In an exemplary embodiment, this is done by utilizing the remote specific devices (e.g., smart phones) to analyze the sound, such as, by way of example only and not by way of limitation, utilizing an application thereof/stored thereon to determine a given sound level and/or noise level at that location, and then the respective devices can output a signal to the central processor apparatus indicative of the noise level local to the sound capture device. In some embodiments, the audio data is analyzed in real time, while in other embodiments, it is not so analyzed.
In an exemplary embodiment, such as when the sound capture devices are formed into a network, such can be used/is used to provide a relative signal-to-noise level across the entire room/enclosed volume. Depending on the nature of the volume and/or how objects therein are arranged, an overall acoustic landscape and/or sound-field can be developed, where several spots are considered excellent or good while the other territory is considered relatively inferior.
In an exemplary embodiment, the signal to noise ratios that are utilized to evaluate the captured sound are based on the fact that it is known what is being focused on and/or what the sound is classified as. In an exemplary embodiment, clips of sound can be utilized as a basis for the evaluation. That is, the captured sound can be captured in clips, or otherwise the captured sound can be reduced into clips, whereupon the clips are evaluated.
Method 800 further includes method action 820, which includes evaluating the captured sounds. By way of example only and not by way of limitation, such can correspond to comparing a noise level in a first sound to a noise level in a second sound. Still further by way of example, such can correspond to comparing a phase of the first captured sound and a phase of the second captured sound. In an exemplary embodiment, the decibel level of the output signals can be compared to one another. In an exemplary embodiment, as will be described in greater detail below, the signals can be analyzed for reverberant sound. Note further that other exemplary comparisons can be utilized. Note also that in at least some exemplary embodiments, method action 820 need not rely on or otherwise utilize comparison techniques. Any type of evaluation can be executed to enable the teachings detailed herein.
In an exemplary embodiment, the action of evaluating the captured sound in method action 820 includes comparing respective gains of the captured sound and/or comparing respective phases of the captured sound.
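By way of illustration only, a minimal sketch of comparing the respective gains and phases of the same sound as captured at two locations, assuming equal-length, simultaneously captured buffers and a known frequency of interest:

```python
import numpy as np

def compare_captures(x1: np.ndarray, x2: np.ndarray, fs: float, f0: float):
    """Compare the gain (level difference, dB) and relative phase (radians)
    of the same sound as captured at two locations, at frequency f0.

    Assumes equal-length buffers captured simultaneously.
    """
    win = np.hanning(len(x1))
    X1 = np.fft.rfft(x1 * win)
    X2 = np.fft.rfft(x2 * win)
    k = int(np.argmin(np.abs(np.fft.rfftfreq(len(x1), 1.0 / fs) - f0)))
    gain_db = 20 * np.log10((np.abs(X2[k]) + 1e-20) / (np.abs(X1[k]) + 1e-20))
    phase_rad = float(np.angle(X2[k] * np.conj(X1[k])))  # phase of x2 relative to x1
    return gain_db, phase_rad
```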
In an exemplary embodiment, any Real-Time Audio Analyzer that is commercially available can be used or otherwise adapted for the system, such as Keysight or Rohde & Schwarz multi-channel audio analyzers. Any device that is configured to perform real-time analysis of multi-channel audio signals in the time and frequency domain can be used, such as the RSA7100A Real-Time Spectrum Analyzer or the Keysight X-Series Signal Analyzers. In an exemplary embodiment, processing is done by a computer, and the microphone inputs can be sampled and digitized, and provided to the computer, where a software package that exists for audio analysis, such as Audacity, is stored thereon and analyzes such.
Method 800 further includes method action 830, which includes developing one or more acoustic landmarks based on the captured sound. By way of example only and not by way of limitation, an acoustic landmark can correspond to a location of relative high background noise, a location of relative low background noise, a location of relative synchronization of phases of the sound at a given location, a location of relative non-synchronization of phases of sound at a given location, etc. Note that there can be a plurality of acoustic landmarks. In an exemplary embodiment, the action of developing one or more acoustic landmarks in method action 830 can include the action of utilizing known locations of the respective sound capture devices relative to a fixed location and/or relative to one another in combination with the evaluated captured sound to develop weighted locations, weighted relative to sound quality. In an exemplary embodiment, the action of developing one or more acoustic landmarks includes the action of evaluating the evaluated captured sound in view of data particular to a hearing related feature of a particular recipient of a hearing prosthesis (e.g., Jane B., Robert C., or a generic individual, such as Ticket Holder for Seat 333, etc.). By way of example only and not by way of limitation, in an exemplary embodiment, the data particular to a hearing related feature of a particular recipient can correspond to the recipient's inability to hear high frequencies and/or middle frequencies and/or the inability to hear sounds below a certain decibel level. Still further, method action 830 can include identifying a location conducive to hearing ambient sound originating in the vicinity of the sound capture devices, based on the captured sound evaluated in view of the data particular to the recipient of the hearing prosthesis.
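By way of illustration only, a minimal sketch of weighting a location's evaluated sound by data particular to a recipient's hearing (here, a hypothetical per-band audiogram), so that frequency bands the recipient cannot hear contribute less to the landmark score:

```python
import numpy as np

def landmark_score(band_levels_db, audiogram_loss_db) -> float:
    """Weight one location's per-band sound levels by a recipient's
    frequency-specific hearing loss to score the location as a landmark.

    band_levels_db: sound level measured at this location in each band.
    audiogram_loss_db: the recipient's hearing loss in the same bands.
    Higher score = more of the sound is audible to this recipient.
    """
    levels = np.asarray(band_levels_db, dtype=float)
    loss = np.asarray(audiogram_loss_db, dtype=float)
    audible = np.clip(levels - loss, 0.0, None)  # level above the shifted threshold
    return float(audible.sum())

# Example: a location rich in high frequencies scores poorly for a recipient
# with severe high-frequency loss (bands: low / middle / high).
print(landmark_score([60.0, 62.0, 58.0], [20.0, 35.0, 70.0]))
```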
In view of the above, in an exemplary embodiment, the results of method 800 can be different for different individuals, such as individuals who utilize the same type of hearing prosthesis (cochlear implant, middle ear implant or bone conduction device) and/or the result of method 800 can be different for different individuals who utilize different types of hearing prostheses.
In an exemplary embodiment, method action 830 includes developing one or more acoustic landmarks by determining a spatial location where there is minimal noise and/or reverberation interference relative to another spatial location based on the evaluation of the captured sound.
Consistent with the specific teachings herein, in an exemplary embodiment, the acoustic landmark(s) developed in method action 830 can be geographical location(s) at which a cochlear implant recipient will have a more realistic hearing percept relative to other geographic locations. Consistent with the concept of utilizing a global approach, the geographic locations are geographic locations of the local area.
It is briefly noted that, unlike method 800 above, the action of capturing sound need not be executed simultaneously. By way of example only and not by way of limitation, in an exemplary embodiment, method 1200 can be executed utilizing a microphone, such as the same microphone, and moving the microphone from location to location over a period of time. This is as opposed to method 800, where a plurality of microphones are utilized to capture sound at the exact same time.
Method 1200 further includes method action 1230, which includes developing a sound field of the locality. In an exemplary embodiment, the developed sound field can correspond to that depicted in
Alternatively, and/or in addition to this, consistent with the teachings detailed above, in an exemplary embodiment, the action of developing the sound field of the locality can include the action of evaluating the evaluated captured sound in view of statistical data relating to cochlear implant users. In this regard, there is data available and/or there is data that can be developed over a statistically significant group of cochlear implant users that can enable statistically significant factors to be deduced therefrom. In this regard, the sound field of the locality can be developed so as to identify locations that are conducive or otherwise favorable to improving the hearing experience of a statistically normal cochlear implant user. By way of example only and not by way of limitation, it is known that cochlear implants have an electrical sound/synthesized sound. Some may consider the sound to be analogous to a breathless person speaking in a hushed manner. A location in the locality or a plurality of locations in the locality can be identified where the captured sound will be more compatible with the hearing percept evoked by a cochlear implant relative to other locations. By way of example only and not by way of limitation, a location where sounds are more pronounced and otherwise have little reverberant sound therein, or otherwise minimize reverberant sound relative to other locations, can be identified when developing the sound field of the locality. Of course, in some embodiments, the sound field of the locality can simply correspond to indicators that indicate that such a location is useful for cochlear implant users. Of course, in some embodiments, the action of evaluating the captured sound can be executed in view of statistical data relating to other types of hearing implant recipients, such as, for example, middle ear implant recipients and/or bone conduction device recipients and/or conventional hearing aid recipients, etc. Moreover, in some embodiments, the action of evaluating the captured sound can be executed in view of statistical data related to a specific model or design of a given implant. By way of example only and not by way of limitation, in an exemplary embodiment, if the cochlear implant is a so-called small or short cochlear implant electrode array design configured to preserve residual hearing, the action of developing a sound field of the locality can correspond to providing indicators of locations where a recipient utilizing such a design and/or model will have a better hearing experience relative to other locations. Indeed, in an exemplary embodiment, the sound field can indicate locations for total electric hearing persons as well as for persons that have partial electric hearing in a given ear.
By way of example only and not by way of limitation, in an exemplary embodiment, features specific to an individual recipient that are utilized to develop the sound fields herein and/or to develop one or more acoustic landmarks herein, etc., can include a dynamic range function with respect to frequency, the given signal processing algorithm that is utilized for a particular recipient (or a feature thereof that is significant with respect to executing the methods detailed herein), an acoustic/electric hearing audiogram, whether or not the recipient is utilizing a noise cancellation algorithm with his or her hearing prosthesis, and/or one or more or all of the variable settings of the prosthesis. It is also noted that the teachings detailed herein can be utilized in a dynamic manner with respect to changing recipient factors. By way of example only and not by way of limitation, in an exemplary embodiment, there can be a scenario where the recipient changes a setting or feature on his or her hearing prosthesis. In an exemplary embodiment, this could initiate a function of the system that provides an indication to the recipient that he or she should change location or the like owing to this change in the setting. For example, in an exemplary embodiment, the teachings detailed herein are implemented based in part on a given setting or a given variable feature (variable within a sound environment period, such as during a concert, etc.). Accordingly, when such features change, the data developed that is specific to that recipient may no longer be correct and/or a better location may exist. The teachings detailed herein include an embodiment where, during a sound event, such as a concert, a movie, a classroom lesson, etc. (something that has a discrete beginning and end, typically accompanied by movement of people in and/or out of an enclosed environment), something changes, which change results in a different utilitarian position for the recipient than that which was previously the case. In an exemplary embodiment, the teachings detailed herein include continuously or semi-continuously or otherwise periodically updating an acoustic landmark data set and/or an acoustic landscape, etc., and providing the recipient with the updated information, which can include indicating to the recipient, automatically, or even manually in some instances, that there are other locations that the recipient may find more utilitarian than that which was previously the case. In an alternate embodiment, the system could also suggest that the recipient adjust the device settings, due to the change in the sound field, and/or utilize knowledge of a change in the audio environment over a spatial region to trigger a device setting change.
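By way of illustration only, a minimal sketch of the settings-change trigger just described; the recommend and notify hooks are hypothetical placeholders for the system functions discussed above:

```python
class LocationAdvisor:
    """Hypothetical hook: when the recipient changes a prosthesis setting
    mid-event, re-run the location recommendation against the current sound
    field and notify the recipient only if the advice has changed."""

    def __init__(self, recommend, notify):
        self.recommend = recommend  # callable: (sound_field, settings) -> location
        self.notify = notify        # callable: (message) -> None
        self.last = None            # last location recommended

    def on_settings_changed(self, sound_field, new_settings):
        new_location = self.recommend(sound_field, new_settings)
        if new_location != self.last:
            self.last = new_location
            self.notify(f"A different location may now suit your settings: {new_location}")
```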
To be clear, any of the teachings detailed herein can be executed 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 times or more during a given sound event. In this regard, in an exemplary embodiment, one or more or all of the methods are executed one of the aforementioned times during a given sound event.
In at least some exemplary embodiments, it is noted that method 800 can be repeated at different temporal locations and/or utilizing different spatial locations. In this regard, in an exemplary embodiment,
It is noted that in at least some exemplary embodiments, method 800 is repeated a number of times. In this regard,
In an exemplary embodiment of method 1300, the method further includes the action of identifying a recurring time period where, statistically, the sound environment is more conducive to a recipient of a hearing prosthesis relative to other time periods, based on a comparison of at least the first and second sound fields (or Nth sound fields). In an exemplary embodiment, such an exemplary method can be utilized to determine, for example, the best time or worst time to visit a restaurant or some other location for a given recipient of a hearing prosthesis and/or for a statistically normal member of a population of hearing prosthesis recipients. That is, beyond developing an overall acoustic landscape/sound field in accordance with the teachings detailed above, some embodiments of the teachings detailed herein take into account the dynamically changing acoustic environment of a given location over time. By way of example only and not by way of limitation, such as by utilizing the exemplary connectivity offered by a modern media platform, the teachings detailed herein can be utilized to provide an analyzed acoustic environment based on a multi-microphone system that is present in a given environment. Throughout the hours, days, and/or weeks, a general pattern and/or general patterns of the acoustic environment can be built up over time. This pattern and/or patterns can be utilized to determine when it would be good and/or bad for the recipient to visit the given location. By way of example only and not by way of limitation, the patterns can indicate relative periods of low background noise, and thus the recipient can choose those periods of time to visit the restaurant so as to have a pleasant meal while engaging in a conversation with his or her friend, so that it will be less demanding or otherwise fatiguing to understand or otherwise listen to the speaker, because there will be less background noise during those periods of time. It is to be understood that in at least some exemplary embodiments, this can be combined with the other methods detailed herein so as to find both a good location to sit in the restaurant as well as a good time to visit the restaurant.
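By way of illustration only, a minimal sketch of building up such a recurring pattern: per-hour noise samples gathered over many days are averaged, and the quietest recurring hour is returned; the data values are hypothetical:

```python
from collections import defaultdict

def quietest_hour(samples) -> int:
    """samples: iterable of (hour_of_day, noise_db) pairs gathered over many
    days; returns the recurring hour with the lowest average noise level."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hour, noise_db in samples:
        sums[hour] += noise_db
        counts[hour] += 1
    return min(sums, key=lambda h: sums[h] / counts[h])

# Example: 18:00 is, on average, the quietest recurring time to visit.
print(quietest_hour([(12, 70), (18, 55), (12, 72), (18, 58), (20, 65)]))
```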
Note further that in at least some embodiments, this concept can be applied to a given locality so as to find a local location that is conducive to hearing, which local location could potentially be time-based with respect to a pattern. By way of example only and not by way of limitation, with respect to the aforementioned restaurant example, it can be found that in some instances, during some time periods, it is better to sit at table 5 facing the door, and during other time periods, it is better to sit at table 4 or table 7 facing away from the door, while in other time periods there really is no good place to sit.
In an exemplary embodiment, method 1500 further includes method action 1530, which includes providing indicators of the sound field indicating locations conducive to hearing with a hearing prosthesis. Such can correspond to highlighting areas in the sound field that are conducive for people with certain types of hearing prostheses, and highlighting areas in a different manner in the sound field that are conducive for people with other types of hearing prostheses, etc.
As noted above, in an exemplary embodiment, there can be utilitarian value with respect to evaluating or otherwise determining locations of high or low or medium background noise. In an exemplary embodiment, the action of developing the sound field can include evaluating the captured sound to identify locations of lower background noise relative to other locations, all other things being equal. By way of example only and not by way of limitation, in an exemplary scenario, such can have utilitarian value with respect to identifying locations that have utility for children with cochlear implants and/or other types of hearing prostheses. In an exemplary scenario, there are one or more children who attend school who utilize cochlear implants, where a frustrating issue for one or more or all of those children is the inability, or otherwise the difficulty, of clearly hearing that which the teacher speaks in a classroom, because they are assigned to a given seat and there can be too much background noise at that given location (e.g., reverberant noise from an HVAC duct, etc.). In this exemplary scenario, the ability to learn is highly impacted by the ability of the child to hear the teacher's speech. In this exemplary scenario, the acoustical environment of the classroom greatly influences the speech intelligibility of the child.
In this exemplary scenario, by way of example only, the background noise (e.g., fan, air conditioner, etc.) can impact the overall sound field that makes up the acoustic landscape in the classroom. While this scenario focuses on background noise, it is noted that in other exemplary embodiments, other features, such as room reverberation, the talking and playing of other children, and/or other classroom acoustical sounds, can also impact the makeup of the acoustic landscape of the classroom.
In this exemplary scenario, the sound landscape/acoustic landscape is such that whether the child is sitting at the center of the classroom, at the edge, or at the back of the classroom will make a substantial difference to his or her hearing perception. In this exemplary scenario, however, it is not known that this is the case. Accordingly, the teachings detailed herein are utilized to find a useful location (for a given time, also, in some embodiments) for the child to sit in the classroom relative to other locations so as to maximize or otherwise improve speech intelligibility for the student who is a cochlear implant recipient.
In this exemplary scenario, the teachings detailed herein can be utilized to aid the teacher, the parent of the child, another caregiver of the child, or even a social service worker to locate the optimal spot in the classroom (at a given time, in some embodiments, where, in some scenarios, the student will be moved or otherwise permitted to move from one seat to another as time progresses owing to a change in the acoustical landscape of that given room over time) in which speech intelligibility will not be deleteriously affected and/or the location where speech intelligibility will be improved. In an exemplary embodiment, this can enable one to better understand and design the layout of a classroom, to ensure that no children are disadvantaged or otherwise to lessen the likelihood that children are disadvantaged.
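A minimal sketch of this seat-finding step follows, assuming each candidate seat has an associated recording from a microphone at that seat. The 10th-percentile-of-frame-RMS noise-floor heuristic is an assumption made for illustration, not a method mandated by the teachings herein.

```python
# Hypothetical sketch: rank classroom seats from quietest to loudest by an
# estimated background-noise floor. All names are illustrative.
import numpy as np

def noise_floor_db(signal: np.ndarray, fs: int, frame_ms: int = 50) -> float:
    """Estimate the background-noise floor as the 10th percentile of
    short-term frame RMS levels, in dB relative to full scale."""
    frame = max(1, int(fs * frame_ms / 1000))
    n_frames = len(signal) // frame
    frames = signal[: n_frames * frame].reshape(n_frames, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    return 20.0 * np.log10(np.percentile(rms, 10))

def rank_seats(recordings: dict[str, np.ndarray], fs: int) -> list[str]:
    """Return seat labels ordered from quietest to loudest noise floor."""
    return sorted(recordings, key=lambda seat: noise_floor_db(recordings[seat], fs))
```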
It is noted that in at least some exemplary embodiments, the methods detailed herein can be practiced in conjunction with the utilization of an FM wireless audio streaming device, where the teacher speaks into a microphone, or otherwise where there is a microphone that better captures the teacher's speech, and the resulting signal is wirelessly relayed to the prosthesis. That said, in at least some exemplary embodiments, the methods detailed herein are explicitly not practiced in conjunction with the utilization of an FM wireless audio streaming device. In this regard, in an exemplary embodiment, this can alleviate the attendant hardware, complexity, and setup time of such a system, and can also prevent the scenario where the children utilizing these devices begin to rely on such systems too much, and thus have difficulties learning or otherwise understanding speech in locations or otherwise in localities where such systems are not present. Accordingly, in an exemplary embodiment, there is a method that includes any one or more of the method actions detailed herein, along with the method action of capturing sound utilizing a hearing prosthesis at a location based on one or more of the method actions detailed herein. In an exemplary embodiment, this method is executed without utilizing the aforementioned FM wireless audio streaming device.
In an exemplary embodiment, the methods herein can be executed in conjunction with a Telecoil/Room Loop booster system. By way of example, a set of receivers could be used to generate a map of the electromagnetic field resulting from the Telecoil in the classroom, or in any other area having a Telecoil, such as a movie theater or an auditorium, etc., the map indicating the position for the child to sit so as to ensure, or otherwise improve the likelihood, that the prosthesis or other device that receives the signal from the Telecoil/Room Loop (e.g., a translation signal for a translation device) picks up a utilitarian signal and/or the strongest signal. Accordingly, in an exemplary embodiment, the teachings detailed herein corresponding to the aforementioned sound fields, or otherwise utilizing such, also correspond to a disclosure where the sound field is instead an electromagnetic field, and the teachings are adapted accordingly to evaluate features of the electromagnetic spectrum as opposed to the sound spectrum.
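As a hedged sketch of this electromagnetic analogue, sampled telecoil field strengths might be interpolated into a room map as follows; the inverse-distance weighting, the grid construction, and all names are assumptions made for illustration only.

```python
# Hypothetical sketch: interpolate telecoil field-strength samples measured
# at known (x, y) positions into a room-wide map, analogous to a sound field.
import numpy as np

def field_map(positions: np.ndarray, strengths: np.ndarray,
              grid_x: np.ndarray, grid_y: np.ndarray) -> np.ndarray:
    """positions: (N, 2) sample points; strengths: (N,) measured levels
    (e.g., induced millivolts in a calibrated pickup coil, an assumed unit).
    Returns an inverse-distance-weighted estimate over the grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)                # (M, 2)
    d = np.linalg.norm(grid[:, None, :] - positions[None], axis=2)   # (M, N)
    w = 1.0 / (d + 1e-6) ** 2          # closer samples count for more
    est = (w * strengths[None]).sum(axis=1) / w.sum(axis=1)
    return est.reshape(gy.shape)       # strongest cell suggests the best seat
```

Inverse-distance weighting is merely a simple stand-in here; a deployed system might instead fit a physical model of the room loop's field.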
Method 1600 further includes method action 1620, which includes evaluating the data to determine at least one spatially linked acoustic related data point based on one or more hearing related features of a specific hearing impaired person. In an exemplary embodiment, the hearing related feature of the specific individual is that the individual relies on a hearing prosthesis to hear. This is as opposed to a person who is hard of hearing but who does not utilize, or otherwise does not have on his or her body, an operational hearing prosthesis (e.g., it was left at home, it ran out of battery power, etc.); such a person is still a hearing-impaired individual.
In an exemplary embodiment, the hearing related feature of the specific individual is that the individual has below average dynamic hearing perception at a certain sound level and/or at a particular frequency. Further, the spatially linked acoustic related data point is a location in the enclosed environment where the effects of the below average dynamic hearing perception will be lessened relative to other locations.
In an exemplary embodiment, the hearing related feature of the specific individual is that the individual has below average hearing comprehension at certain reverberation levels. Further, the spatially linked acoustic related data point is a location in the enclosed environment where reverberation levels are lower than at other locations.
In an exemplary embodiment, the hearing related feature of the specific individual is a current profile of a variable profile of a hearing prosthesis worn by the individual. By way of example only and not by way of limitation, in an exemplary embodiment, the profile can be the gain profile and/or the volume profile of a hearing prosthesis, which profile can be changed by the recipient. In this regard, in an exemplary embodiment, method action 1620 is executed based on the current profile (e.g., setting) of, for example, the volume of the prosthesis. Note also that in at least some exemplary embodiments, the variable profile of the hearing prosthesis can be a setting of a noise cancellation system that has various settings, and/or the profile can simply be whether or not that system has been activated. Still further, the variable profile of the hearing prosthesis can relate to a beamforming system, in which case the variable profile can be a setting of the beamforming system and/or whether or not the beamforming system is activated. Indeed, in an exemplary embodiment, the one or more hearing related features of a specific hearing-impaired individual can be whether or not the prosthesis that is being utilized by the individual even has a noise cancellation system and/or a beamforming system, etc.
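To make the preceding hearing-related features concrete, the following is a minimal sketch of scoring candidate locations against a specific recipient's features. All profile fields, penalty weights, and the assumed benefit of noise cancellation are illustrative assumptions, not parameters stated in the disclosure.

```python
# Hypothetical sketch: pick the location whose recipient-specific penalties
# (weak-band noise, excess reverberation) are lowest.
from dataclasses import dataclass

@dataclass
class RecipientProfile:
    weak_band_hz: tuple[float, float]   # frequency band of reduced perception
    max_reverb_rt60_s: float            # tolerated reverberation time
    noise_cancellation: bool            # prosthesis feature availability

@dataclass
class LocationAcoustics:
    label: str
    band_noise_db: float   # noise level within the recipient's weak band
    rt60_s: float          # measured reverberation time at this location

def best_location(profile: RecipientProfile,
                  locations: list[LocationAcoustics]) -> LocationAcoustics:
    """Return the location with the lowest recipient-specific penalty."""
    def penalty(loc: LocationAcoustics) -> float:
        # Assumed: active noise cancellation halves the effective weight of
        # background noise in the recipient's weak band.
        noise_weight = 0.5 if profile.noise_cancellation else 1.0
        p = noise_weight * loc.band_noise_db
        if loc.rt60_s > profile.max_reverb_rt60_s:
            p += 10.0 * (loc.rt60_s - profile.max_reverb_rt60_s)
        return p
    return min(locations, key=penalty)
```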
Consistent with the teachings above, in an exemplary embodiment, the action of receiving data indicative of sound captured can be based on sound captured effectively simultaneously by a plurality of respective microphones of portable devices carried by transient people who are present in the enclosed environment and who have no relationship to one another.
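A minimal sketch of pooling such unrelated, effectively simultaneous reports might look as follows; the Report fields and the one-second grouping window are assumptions made for illustration.

```python
# Hypothetical sketch: group level reports from unrelated portable devices
# into effectively simultaneous snapshots of the sound field.
from dataclasses import dataclass

@dataclass
class Report:
    device_id: str
    timestamp_s: float
    position: tuple[float, float]
    level_db: float

def effectively_simultaneous(reports: list[Report],
                             window_s: float = 1.0) -> list[list[Report]]:
    """Group reports whose timestamps fall within the same short window,
    treating each group as one simultaneous snapshot of the sound field."""
    groups: list[list[Report]] = []
    for r in sorted(reports, key=lambda r: r.timestamp_s):
        if groups and r.timestamp_s - groups[-1][0].timestamp_s <= window_s:
            groups[-1].append(r)
        else:
            groups.append([r])
    return groups
```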
In an exemplary embodiment, there is a method comprising capturing sound at a plurality of respectively effectively spatially separated locations of a locality, evaluating the captured sound, and developing a sound field of the locality. In an exemplary embodiment of this embodiment, the action of developing the sound field includes evaluating the captured sound based on signal-to-noise ratios of the respective microphones. In an exemplary embodiment, the methods detailed above and/or below include presenting the sound field of the locality to people who are and/or will be present in the locality and providing indicators of the sound field indicating locations conducive to hearing with a hearing prosthesis. In an exemplary embodiment, the methods detailed above and/or below include evaluating the captured sound to identify locations of lower background noise relative to other locations, all other things being equal.
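By way of a hedged example of the signal-to-noise-ratio-based evaluation, per-microphone estimates might be fused as follows; the dB-to-linear weighting scheme is an illustrative assumption.

```python
# Hypothetical sketch: fuse per-microphone estimates of the same sound-field
# quantity, trusting microphones with higher SNR more.
import numpy as np

def fuse_estimates(levels_db: np.ndarray, snrs_db: np.ndarray) -> float:
    """Weight each microphone's level estimate by its linear SNR so that
    noisy capture points contribute less to the developed sound field."""
    weights = 10.0 ** (snrs_db / 10.0)   # dB -> linear power ratio
    return float(np.average(levels_db, weights=weights))
```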
It is noted that the disclosure herein includes analysis being executed by certain devices and/or systems. It is noted that any disclosure herein of an analysis also corresponds to a disclosure of an embodiment where an action is executed based on an analysis executed by another device. By way of example only and not by way of limitation, any disclosure herein of a device that analyzes a certain feature and then reacts based on the analysis also corresponds to a device that receives input from a device that has performed the analysis, where the device acts on the input. Also, the reverse is true. Any disclosure herein of a device that acts based on input also corresponds to a device that can analyze data and act on that analysis.
It is noted that any disclosure herein of instructions also corresponds to a disclosure of an embodiment that replaces the word instructions with information, and vice versa.
It is noted that any disclosure herein of an alternate arrangement and/or an alternate action corresponds to a disclosure of the combined original arrangement/original action with the alternate arrangement/alternate action.
It is noted that any method action detailed herein also corresponds to a disclosure of a device and/or system configured to execute one or more or all of the method actions associated therewith detailed herein. In an exemplary embodiment, this device and/or system is configured to execute one or more or all of the method actions in an automated fashion. That said, in an alternate embodiment, the device and/or system is configured to execute one or more or all of the method actions after being prompted by a human being. It is further noted that any disclosure of a device and/or system detailed herein corresponds to a method of making and/or using that device and/or system, including a method of using that device according to the functionality detailed herein.
It is noted that embodiments include non-transitory computer-readable media having recorded thereon a computer program for executing one or more or any of the method actions detailed herein. Indeed, in an exemplary embodiment, there is a non-transitory computer-readable medium having recorded thereon a computer program for executing at least a portion of any method action detailed herein.
It is further noted that any disclosure of a device and/or system detailed herein also corresponds to a disclosure of otherwise providing that device and/or system.
It is further noted that any element of any embodiment detailed herein can be combined with any other element of any embodiment detailed herein unless otherwise stated, provided that the art enables such. It is also noted that in at least some exemplary embodiments, any one or more of the elements of the embodiments detailed herein can be explicitly excluded in an exemplary embodiment. That is, in at least some exemplary embodiments, there are embodiments that explicitly do not have one or more of the elements detailed herein.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the scope of the invention.
This application claims priority to U.S. Provisional Application No. 62/563,145, entitled ACOUSTIC SPOT IDENTIFICATION, filed on Sep. 26, 2017, naming Alexander VON BRASCH of Macquarie University, Australia as an inventor, the entire contents of that application being incorporated herein by reference in its entirety.
Filing Document: PCT/IB2018/057420; Filing Date: Sep. 25, 2018; Country: WO; Kind: 00
Provisional Application: 62/563,145; Date: Sep. 2017; Country: US