Aspects of the present invention relate generally to synchronized spectral analysis.
Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades. Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component). Medical devices, such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
The types of medical devices and the ranges of functions performed thereby have increased over the years. For example, many medical devices, sometimes referred to as “implantable medical devices,” now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
In one aspect presented herein, a method is provided. The method comprises: establishing a data link between a first hearing device configured to be disposed at a first side of a head of a user and a second hearing device configured to be disposed at a second side of the head of the user; obtaining a synchronization event from the data link; and using the synchronization event to align spectral analysis of audio signals at the first hearing device with spectral analysis of audio signals at the second hearing device.
In another aspect presented herein, a method is provided. The method comprises: receiving first audio data at a first hearing device of a binaural hearing device system; performing spectral analysis of the first audio data at the first hearing device; aligning a timing of the spectral analysis of the first audio data at the first hearing device with a timing of spectral analysis of second audio data at a second hearing device of the binaural hearing device system; and following the spectral analysis, generating, at the first hearing device, a first sequence of stimulation signals representative of the first audio data.
In another aspect presented herein, a binaural hearing device system is provided. The binaural hearing device system comprises: a first hearing device located at a first ear of a user and including one or more first processors configured to: obtain a first set of audio samples, and capture one or more buffers of the first set of audio samples; and a second hearing device located at a second ear of the user and including one or more second processors configured to: obtain a second set of audio samples, and capture one or more buffers of the second set of audio samples, wherein the one or more first processors and the one or more second processors are configured to cooperate to synchronize the capture of the one or more buffers of the first set of audio samples with the capture of the one or more buffers of the second set of audio samples.
In another aspect, one or more non-transitory computer readable storage media encoded with instructions are provided. The one or more non-transitory computer readable storage media include instructions that, when executed by one or more processors, cause the one or more processors to: establish a data link between a first hearing device configured to be located at a first side of a head of a user and a second hearing device configured to be located at a second side of the head of the user; obtain a synchronization event from the data link; and use the synchronization event to align spectral analysis of audio signals at the first hearing device with spectral analysis of audio signals at the second hearing device.
In another aspect presented herein, a hearing device configured to be worn on a first side of a head of a user is provided. The hearing device comprises: one or more sound inputs configured to receive a first set of sound signals associated with at least one sound source; a wireless transceiver configured to form a data link with a second hearing device configured to be disposed at a second side of the head of the user; and at least one processor configured to perform spectral analysis of the first set of sound signals, wherein a timing of the spectral analysis is based on at least one characteristic of the data link.
In another aspect presented herein, a system is provided. The system comprises: a first sensory device including one or more first processors configured to: obtain a first set of input samples, and capture one or more buffers of the first set of input samples; and a second sensory device including one or more second processors configured to: obtain a second set of input samples, and capture one or more buffers of the second set of input samples, wherein the one or more first processors and the one or more second processors are configured to cooperate to synchronize the capture of the one or more buffers of the first set of input samples with the capture of the one or more buffers of the second set of input samples.
In another aspect, a method is provided. The method comprises: establishing a data link between a first device and a second device; receiving first data at the first device; performing spectral analysis of the first data at the first device; and aligning a timing of the spectral analysis of the first data at the first device with a timing of spectral analysis of second data at the second device based on information obtained from the data link.
Embodiments of the present invention are described herein in conjunction with the accompanying drawings.
Many devices, such as hearing devices, certain consumer electronics, wearable devices, etc., operate by performing “spectral analysis” of sound signals. As used herein, “spectral analysis” refers to a process to determine the frequency contents of received time domain sound signals (e.g., convert time domain signals into the frequency domain). Presented herein are techniques for synchronized spectral analysis in systems comprising first and second separate/independent devices. That is, the techniques presented herein are configured to enable the first and second devices to perform spectral analysis at the same time on contemporaneous input signals/data.
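By way of a non-limiting illustration only, the following Python sketch shows the kind of operation that "spectral analysis" denotes here: a buffer of time domain audio samples is converted into frequency domain content via a fast Fourier transform (FFT). The buffer length, sample rate, and use of an FFT are illustrative assumptions, not requirements of the techniques presented herein.

```python
import numpy as np

def spectral_analysis(buffer, sample_rate=20000):
    """Convert one buffer of time domain audio samples into frequency
    domain content (illustrative only; the buffer length, sample rate,
    and use of an FFT are assumptions, not requirements)."""
    spectrum = np.fft.rfft(buffer)                 # real-input FFT
    magnitudes = np.abs(spectrum)                  # energy per frequency bin
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / sample_rate)
    return freqs, magnitudes

# Example: analyze a 128-sample buffer containing a 1 kHz tone.
t = np.arange(128) / 20000.0
freqs, mags = spectral_analysis(np.sin(2 * np.pi * 1000 * t))
```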
For ease of illustration, the techniques presented herein are primarily described with reference to hearing device systems comprised of at least two devices that operate to convert sound signals into one or more acoustic, mechanical, and/or electrical stimulation signals for delivery to a user/recipient. The one or more hearing devices that can form part of a hearing device system include, for example, one or more personal sound amplification products (PSAPs), hearing aids, cochlear implants, middle ear stimulators, bone conduction devices, brain stem implants, electro-acoustic cochlear implants or electro-acoustic devices, and other devices providing acoustic, mechanical, and/or electrical stimulation to a recipient.
One specific type of hearing device system, referred to herein as a “binaural hearing device system” or more simply as a “binaural system,” includes two hearing devices, where one of the two hearing devices is positioned at each ear of the recipient. More specifically, in a binaural system each of the two hearing devices provides stimulation to one of the two ears of the recipient (i.e., either the right or the left ear of the recipient). The binaural system can include any combination of one or more personal sound amplification products (PSAPs), hearing aids, middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electro-acoustic prostheses, auditory brain stimulators, cochlear implants, combinations or variations thereof, etc. For example, embodiments presented herein can be implemented in binaural systems comprising two hearing aids, two cochlear implants, a hearing aid and a cochlear implant, or any other combination of the above or other devices. As such, in certain embodiments, the techniques presented herein enable synchronized spectral analysis in binaural hearing device systems comprising first and second hearing devices positioned at first and second ears, respectively, of a recipient.
As noted above, it is to be appreciated that the techniques presented herein may be implemented with any of a number of systems, including in conjunction with cochlear implants or other hearing devices, balance prostheses (e.g., vestibular implants), retinal or other visual prostheses, cardiac devices (e.g., implantable pacemakers, defibrillators, etc.), seizure devices, sleep apnea devices, electroporation devices, spinal cord stimulators, deep brain stimulators, motor cortex stimulators, sacral nerve stimulators, pudendal nerve stimulators, vagus/vagal nerve stimulators, trigeminal nerve stimulators, diaphragm (phrenic) pacers, pain relief stimulators, other neural, neuromuscular, or functional stimulators, etc. In further embodiments, the techniques presented herein may also be implemented by, or used in conjunction with, systems comprising remote microphone devices, consumer electronic devices, etc.
Merely for ease of description, aspects of the techniques will be generally described with reference to a specific system, namely a bilateral cochlear implant system. As used herein, a “bilateral cochlear implant system” is a specific type of binaural system that includes first and second cochlear implants located at first and second ears, respectively, of a recipient. In such systems, each of the two cochlear implants delivers stimulation (current) pulses to one of the two ears of the recipient (i.e., either the right or the left ear of the recipient). In a bilateral cochlear implant system, one or more of the two cochlear implants may also deliver acoustic stimulation to the ears of the recipient (e.g., an electro-acoustic cochlear implant) and/or the two cochlear implants need not be identical with respect to, for example, the number of electrodes used to electrically stimulate the cochlea, the type of stimulation delivered, etc.
Referring specifically to
The cochlear implant 102R is substantially similar to cochlear implant 102L. In particular, cochlear implant 102R includes an external component 104R comprising a sound processing unit 106R, and an implantable component 112R comprising internal coil 114R, stimulator unit 142R, and elongate stimulating assembly 116R.
As noted, the external component 104L of cochlear implant 102L includes a sound processing unit 106L. The sound processing unit 106L comprises one or more input devices 113L that are configured to receive input signals (e.g., sound or data signals). In the example of
The sound processing unit 106L also comprises one type of a closely-coupled transmitter/receiver (transceiver) 122L, referred to as a radio-frequency (RF) transceiver 122L, a power source 123L, and a processing module 124L. The processing module 124L comprises one or more processors 125L and a memory 126L that includes sound processing logic 127L and spectral analysis synchronization logic 128L.
In the examples of
The implantable component 112L comprises an implant body (main module) 134L, a lead region 136L, and the intra-cochlear stimulating assembly 116L, all configured to be implanted under the skin/tissue (tissue) 115 of the recipient. The implant body 134L generally comprises a hermetically-sealed housing 138L in which RF interface circuitry 140L and a stimulator unit 142L are disposed. The implant body 134L also includes the internal/implantable coil 114L that is generally external to the housing 138L, but which is connected to the RF interface circuitry 140L via a hermetic feedthrough (not shown in
As noted, stimulating assembly 116L is configured to be at least partially implanted in the recipient's cochlea. Stimulating assembly 116L includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 144L that collectively form a contact or electrode array 146L for delivery of electrical stimulation (current) to the recipient's cochlea.
Stimulating assembly 116L extends through an opening in the recipient's cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 142L via lead region 136L and a hermetic feedthrough (not shown in
As noted, the cochlear implant 102L includes the external coil 108L and the implantable coil 114L. The coils 108L and 114L are typically wire antenna coils each comprised of multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire. Generally, a magnet is fixed relative to each of the external coil 108L and the implantable coil 114L. The magnets fixed relative to the external coil 108L and the implantable coil 114L facilitate the operational alignment of the external coil 108L with the implantable coil 114L. This operational alignment of the coils enables the external component 104L to transmit data, as well as possibly power, to the implantable component 112L via a closely-coupled wireless link formed between the external coil 108L and the implantable coil 114L. In certain examples, the closely-coupled wireless link is a radio frequency (RF) link. However, various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive and inductive transfer, may be used to transfer the power and/or data from an external component to an implantable component and, as such,
As noted above, sound processing unit 106L includes the processing module 124L. The processing module 124L is configured to convert received input signals (received at one or more of the input devices 113L) into output signals 145L for use in stimulating a first ear of a recipient (i.e., the processing module 124L is configured to perform sound processing on input signals received at the sound processing unit 106L). Stated differently, in the sound processing mode, the one or more processors 125L are configured to execute sound processing logic 127L stored, for example, in memory 126L to convert the received input signals into output signals 145L that represent electrical stimulation for delivery to the recipient.
In the embodiment of
As noted, cochlear implant 102R is substantially similar to cochlear implant 102L and comprises external component 104R and implantable component 112R. External component 104R includes a sound processing unit 106R that comprises external coil 108R, input devices 113R (i.e., one or more sound input devices 118R, one or more auxiliary input devices 119R, and wireless transceiver 120R), closely-coupled transceiver (RF transceiver) 122R, power source 123R, and processing module 124R. The processing module 124R includes one or more processors 125R and a memory 126R that includes sound processing logic 127R and spectral analysis synchronization logic 128R. The implantable component 112R includes an implant body (main module) 134R, a lead region 136R, and the intra-cochlear stimulating assembly 116R, all configured to be implanted under the skin/tissue (tissue) 115 of the recipient. The implant body 134R generally comprises a hermetically-sealed housing 138R in which RF interface circuitry 140R and a stimulator unit 142R are disposed. The implant body 134R also includes the internal/implantable coil 114R that is generally external to the housing 138R, but which is connected to the RF interface circuitry 140R via a hermetic feedthrough (not shown in
It is to be appreciated that the arrangements of cochlear implants 102L and 102R, as shown in
The cochlear implants 102L and 102R are configured to establish a binaural wireless communication link/channel 162 (binaural wireless link) that enables the cochlear implants 102L and 102R (e.g., the sound processing units 106L/106R and/or the implantable components 112L/112R, if equipped with wireless transceivers) to wirelessly communicate with one another. The binaural wireless link 162 can be, for example, a magnetic induction (MI) link, a standardized wireless channel, such as a Bluetooth®, Bluetooth® Low Energy (BLE) or other channel interface making use of any number of standard wireless streaming protocols, a proprietary protocol for wireless exchange of data, etc. Bluetooth® is a registered trademark owned by the Bluetooth® SIG. The binaural wireless link 162 is enabled by the wireless transceivers 120L and 120R.
The sound processing performed at each of the cochlear implant 102L and the cochlear implant 102R (e.g., at the sound processing units 106L/106R and/or the implantable components 112L/112R, if equipped with processing modules) includes some form of spectral analysis (e.g., some process to determine the frequency contents of received time domain sound signals). In certain examples, as described further below, the spectral analysis can include a filter-bank (filterbank) analysis.
The present inventors have recognized that, for a binaural hearing device system, such as bilateral cochlear implant system 100, binaural synchronization of the spectral analysis (e.g., filter-bank analysis window/buffer timing) is important, as the content of each buffer is a factor in the analysis of the input sound that will eventually be converted to stimulation. As such, presented herein are techniques for aligning/synchronizing the spectral analysis across both devices of a binaural hearing device system.
More specifically, referring to the example of
As noted above, the cochlear implant 102L and cochlear implant 102R are configured to communicate with one another via a binaural wireless link 162. The cochlear implants 102L/102R establish the binaural wireless link 162 (e.g., over magnetic induction, Bluetooth®, etc.) and implement a synchronization scheme to ensure the link 162 operates to reliably transmit data from one device to the other. That is, the binaural communication channel 162 can operate by defining time slots for communication with a timing synchronization mechanism that the cochlear implants 102L/102R use to prevent collisions of transmitted data. In accordance with embodiments presented herein, the synchronization of the binaural wireless link 162 is used to synchronize the spectral analysis at the cochlear implants 102L and 102R.
For example, in certain embodiments, when established, one or more of the wireless transceivers 120L/120R issues a “synchronization event” indicating that the binaural wireless link 162 is operational and synchronized. This synchronization event indicates a synchronized time base for the binaural wireless link 162. That is, once the synchronization event occurs, each of the wireless transceivers 120L/120R is aware of the precise operational timing of the other. In accordance with embodiments presented herein, this time base for the binaural wireless link 162 is used to establish a “synchronized spectral analysis time base” at cochlear implants 102L and 102R (e.g., create a “tick” counter or similar scheme at both cochlear implants). In certain embodiments, the synchronized spectral analysis time base can indicate, or be used to determine, a “time offset” or “operational time difference” between the two implants. In certain examples, the synchronized spectral analysis time base is accurate to a closest audio sample at the audio sample rate of the cochlear implants 102L and 102R.
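One possible realization of such a synchronized spectral analysis time base is sketched below in Python. The interfaces are hypothetical (the actual transceiver and processor APIs are not specified by this description): a per-device "tick" counter is reset when the transceivers report the synchronization event and thereafter advances once per audio sample, so that both devices share a time base accurate to the closest audio sample.

```python
class SpectralAnalysisTimeBase:
    """Sketch of a per-device 'tick' counter derived from the wireless
    link synchronization event (hypothetical interfaces; the actual
    transceiver and processor APIs are not specified here)."""

    def __init__(self, sample_rate_hz=20000):
        self.sample_rate_hz = sample_rate_hz
        self.ticks = None  # audio samples elapsed since the sync event

    def on_link_synchronized(self):
        # Called on both devices when the transceiver reports that the
        # binaural link is operational and synchronized, so that both
        # counters start from a common reference instant.
        self.ticks = 0

    def on_audio_sample(self):
        # Advance once per audio sample, keeping the shared time base
        # accurate to the closest audio sample.
        if self.ticks is not None:
            self.ticks += 1
```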
In accordance with embodiments presented herein, the synchronized spectral analysis time base is used at the processing module 124L (e.g., by the spectral analysis synchronization logic 128L) and the processing module 124R (e.g., by the spectral analysis synchronization logic 128R) for binaural synchronization of the spectral analysis operations. More specifically, as shown in
As noted above, “spectral analysis” refers to a process to determine the frequency contents of received time domain sound signals (e.g., convert time domain signals into the frequency domain). In hearing devices, such as cochlear implants 102L and 102R, spectral analysis is performed repeatedly, where each spectral analysis run/iteration converts a different group/subset (buffer) of audio samples into the frequency domain. That is, a spectral analysis is performed on only a subset of the received audio samples collected during a previous period of time, for example, on audio samples collected in a buffer. For example, audio sampling is performed at a certain rate, such as at a rate of 20 kilohertz (kHz) (i.e., 20,000 audio samples a second), while spectral analysis is performed at a lower rate, such as at a rate of 1 kHz (i.e., 1,000 times a second). This means that, in this illustrative example, a new spectral analysis is started for every 20 audio samples that are received.
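The relationship between the audio sampling rate and the spectral analysis rate in the illustrative 20 kHz / 1 kHz example can be sketched as follows. The 128-sample window length is a hypothetical choice; consecutive buffers overlap, as is common for FFT-based filter-banks:

```python
import numpy as np

SAMPLE_RATE = 20000    # 20 kHz audio sampling, per the example above
ANALYSIS_RATE = 1000   # 1 kHz spectral analysis rate
HOP = SAMPLE_RATE // ANALYSIS_RATE   # a new analysis every 20 samples
WINDOW = 128           # hypothetical buffer length per analysis run

def analysis_runs(samples):
    """Yield one frequency domain result per spectral analysis run:
    the hop size sets the analysis rate, while consecutive buffers
    overlap (as is common for FFT-based filter-banks)."""
    for start in range(0, len(samples) - WINDOW + 1, HOP):
        buffer = samples[start:start + WINDOW]
        yield np.abs(np.fft.rfft(buffer))
```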
As noted above, the embodiments presented herein are used to ensure binaural alignment of the spectral analysis at each of the cochlear implants 102L and 102R. That is, the techniques presented herein enable the cochlear implant 102L and cochlear implant 102R to perform spectral analysis at the same time, meaning the resulting frequency domain signals will correspond to “contemporaneous audio content.” As used herein, “contemporaneous audio content” refers to audio samples that represent/correspond to sounds captured/received during the same period of time (e.g., audio samples captured at each of the cochlear implants 102L and 102R that represent sounds received during the same time period by the corresponding sound input elements).
For example, cochlear implant 102L collects different buffers 152L-(1)-152L-(N) of the audio samples 150L for spectral analysis, while cochlear implant 102R collects different buffers 152R-(1)-152R-(N) of the audio samples 150R for spectral analysis. As shown in
However, as shown in
As noted, if the capture of buffers 152L and 152R is started at the same time, the buffers 152L and 152R, when filled, will include contemporaneous audio content. As a result, the spectral analysis performed by the cochlear implants 102L and 102R on corresponding filled buffers will also occur at the same time, and will include contemporaneous audio content. Stated differently, the output of the spectral analysis at each of the cochlear implants 102L and 102R will be aligned in both time and in terms of contemporaneous audio content.
The cochlear implant 102L and cochlear implant 102R can determine the synchronization timing of the binaural wireless link 162 in any of a number of different manners. As noted above, in certain embodiments, a synchronization event/notification is generated (e.g., at the wireless transceivers 120L and 120R) indicating the synchronization timing of the binaural wireless link 162 to the cochlear implants 102L and 102R. However, this technique is merely illustrative and other techniques are possible. In one form, the synchronization event/notification is generated when the binaural wireless link 162 is established and the synchronization event delivers a relative time stamp to each cochlear implant 102L and 102R.
In certain embodiments, the synchronization event indicates, or is used to determine, an operational time difference between the cochlear implants 102L and 102R. For example, in the arrangement of
As noted, the above process aligns/synchronizes the capture of the buffers 152L and 152R at cochlear implants 102L and 102R. In the cochlear implant system 100, this could be of benefit for localization as each spectral analysis (filter-bank analysis) period leads to a selection of maxima (channels to stimulate), and as long as timing in the further parts of the signal path and the implant interface is maintained, these will be delivered to the cochlea at the same time, and represent the same portion of input audio on both cochlear implants 102L and 102R.
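For context, the maxima selection referenced above can be sketched as follows. This is an illustrative "n-of-m"-style selection under assumed parameters; the number of maxima and the energy-based rule are hypothetical, not mandated by this description:

```python
import numpy as np

def select_maxima(channel_energies, n_maxima=8):
    """Illustrative 'n-of-m'-style selection: pick the n channels with
    the largest energy in one filter-bank analysis period (the number
    of maxima and the energy rule are hypothetical). Returns channel
    indices in channel order."""
    order = np.argsort(channel_energies)[::-1]   # largest energy first
    return np.sort(order[:n_maxima])

# When both implants analyze contemporaneous buffers, the same sound
# scene drives the maxima selection on both sides at the same time.
```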
More specifically,
In the arrangement of
More specifically, in the example of
In accordance with embodiments presented herein, the synchronized spectral analysis time base is used at the external component 304L (e.g., by spectral analysis synchronization logic) and the external component 304R (e.g., by spectral analysis synchronization logic) for binaural synchronization of the spectral analysis operations performed at the external components. More specifically, as shown in
As noted above, spectral analysis refers to a process to determine the frequency contents of received time domain sound signals. The spectral analysis is performed on a subset of the received audio samples collected, for example, in a buffer. To ensure binaural alignment of the spectral analysis, the synchronized spectral analysis time base is used to synchronize the capture of buffers at each of the external components 304L and 304R. For example, external component 304L collects different buffers 352L-(1)-352L-(N) of the audio samples 350L for spectral analysis, while external component 304R collects different buffers 352R-(1)-352R-(N) of the audio samples 350R for spectral analysis.
In the embodiment of
In the example of
In the embodiment of
As noted above, spectral analysis refers to a process to determine the frequency contents of received time domain sound signals. The spectral analysis is performed on a subset of the received audio samples collected, for example, in a buffer. To ensure binaural alignment of the spectral analysis, the synchronized spectral analysis time base is also used to synchronize the capture of buffers at each of the implantable components 312L and 312R. For example, implantable component 312L collects different buffers 362L-(1)-362L-(N) of the audio samples 358L for spectral analysis, while implantable component 312R collects different buffers 362R-(1)-362R-(N) of the audio samples 358R for spectral analysis.
In certain embodiments, the synchronized time base indicates, or is used to determine, that the implantable component 312L lags implantable component 312R. As such, in this example, the operations of implantable component 312L and/or implantable component 312R are adjusted to align the buffer capture at each device. These adjustments include not only the time at which the audio capture is initiated, but also the actual audio content samples that are captured. That is, using the synchronized time base, the implantable components 312L and 312R can determine the specific time at which the other device will capture the buffers. Accordingly, the implantable component 312L and/or the implantable component 312R adjusts operation so that the capture of buffer 362R-(1) will begin at the same time as that of buffer 362L-(1). If this capture occurs at the same time and with sufficient accuracy, then buffers 362L-(1) and 362R-(1) will include audio samples that are associated with contemporaneous audio content.
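One way such an adjustment could be computed is sketched below, assuming buffer boundaries are defined at multiples of the buffer length on the shared time base; the function name, the 'offset_samples' parameter, and the boundary convention are all hypothetical:

```python
def aligned_buffer_start(local_tick, offset_samples, buffer_len):
    """Sketch: pick the next buffer boundary on the shared time base so
    that both devices begin capture on the same shared tick. The names,
    the 'offset_samples' sign convention, and the rule that boundaries
    fall on multiples of the buffer length are all hypothetical."""
    shared_tick = local_tick + offset_samples      # local time -> shared time
    remainder = shared_tick % buffer_len
    wait = (buffer_len - remainder) % buffer_len   # samples until next boundary
    return local_tick + wait                       # local tick to start capture
```

Because both devices compute the same boundary on the shared time base, each begins filling its next buffer on the same shared tick, which is the alignment described above.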
As noted, the above process aligns/synchronizes the capture of the buffers 352L and 352R, as well as 362L and 362R, at the cochlear implants 302L and 302R. In the cochlear implant system 300, this could be of benefit for localization as each spectral analysis (filter-bank analysis) period leads to a selection of maxima (channels to stimulate), and as long as timing in the further parts of the signal path and the implant interface is maintained, these will be delivered to the cochlea at the same time, and will represent contemporaneous portions of input audio on both cochlear implants 302L and 302R.
As noted,
In accordance with certain examples presented herein, the buffers are generally “data windows” or “data buffers” for use in a spectral analysis process, such as a fast Fourier transform (FFT) or similar filter-bank.
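As one illustration of forming such a data window, a tapering window function may be applied to each buffer before the FFT. The Hann window in this Python sketch is an assumed, illustrative choice rather than a required one:

```python
import numpy as np

def windowed_fft(buffer):
    """Apply a tapering window before the FFT, as is typical for
    overlapped filter-bank analysis (the Hann window here is an
    illustrative assumption, not a requirement)."""
    window = np.hanning(len(buffer))
    return np.fft.rfft(buffer * window)
```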
More specifically,
In the example of
In general, if the buffer (FFT data window) alignment is very different across the hearing devices 402L and 402R before the synchronization or realignment event, then a portion of audio samples can be discarded (leaving a small gap in the sound) or a portion of the previous data can be overwritten with a new calculation. Alternatively, a smoothing operation or interpolation operation can be used between alignment events, if there is a need to avoid any discontinuities in the sound. However, given that FFT calculations are often overlapped, any shift in the overlap point (due to the synchronization event) is likely to be minor in terms of effects on the sound presented to the cochlear implant recipient.
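The "discard a portion of audio samples" option combined with smoothing could, for instance, look like the following sketch; the linear crossfade and the 32-sample fade length are assumptions for illustration:

```python
import numpy as np

def drop_and_smooth(pre, post, drop, fade_len=32):
    """Sketch of the 'discard a portion of audio samples' option: drop
    'drop' samples at the alignment event, then crossfade between the
    old and new segments so the join is not audible. The linear fade
    and the 32-sample fade length are illustrative assumptions."""
    post = post[drop:]                      # discard samples to realign windows
    fade = np.linspace(0.0, 1.0, fade_len)
    return np.concatenate([
        pre[:-fade_len],                                       # untouched history
        pre[-fade_len:] * (1 - fade) + post[:fade_len] * fade,  # crossfade join
        post[fade_len:],                                       # realigned stream
    ])
```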
More specifically, in practice, the wireless synchronization event will be communicated to the processor cores 425L and 425R with some delay (assuming that there is a wireless chip/sub-system separate from the sound processor core(s)). One solution is that a time counter can be started on the wireless sub-system when the synchronization event occurs. An amount of time, indicated by ‘A’ in
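Whatever the exact value of the delay 'A', this compensation scheme can be sketched as follows; the class and method names are hypothetical and merely illustrate starting a counter at the true event time and back-dating the delayed notification:

```python
class ProcessorCore:
    """Receives the delayed notification and back-dates the event."""

    def __init__(self):
        self.sync_tick = None

    def on_sync_notification(self, elapsed_ticks, local_tick):
        # The synchronization event actually occurred 'elapsed_ticks'
        # ago, so subtract the elapsed count from the current tick.
        self.sync_tick = local_tick - elapsed_ticks


class WirelessSubsystem:
    """Starts a counter at the true synchronization event time; the
    counter value sent with the delayed notification equals the delay
    'A' expressed in ticks."""

    def __init__(self):
        self.counter = None

    def on_sync_event(self):
        self.counter = 0          # starts at the true event time

    def tick(self):
        if self.counter is not None:
            self.counter += 1     # e.g., one count per audio sample

    def notify_core(self, core, local_tick):
        # Invoked once the (delayed) notification reaches the core.
        core.on_sync_notification(elapsed_ticks=self.counter,
                                  local_tick=local_tick)
```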
Merely for ease of description, the techniques presented herein have been primarily described with reference to an illustrative medical device system, namely a cochlear implant system that delivers electrical stimulation to both ears of a recipient. However, it is to be appreciated that the techniques presented herein may also be used with a variety of other medical devices that, while providing a wide range of therapeutic benefits to recipients, patients, or other users, may benefit from the techniques presented herein. For example, a cochlear implant system in accordance with embodiments presented herein may also deliver acoustic stimulation to one or both ears of the recipient (e.g., one or more of the cochlear implants is an electro-acoustic cochlear implant). It is also to be appreciated that the two cochlear implants of a cochlear implant system in accordance with embodiments presented need not be identical with respect to, for example, the number of electrodes used to electrically stimulate the cochlea, the type of stimulation delivered, etc.
Furthermore, it is to be appreciated that the techniques presented herein may be used with other systems including two or more devices, such as systems including one or more personal sound amplification products (PSAPs), one or more acoustic hearing aids, one or more bone conduction devices, one or more middle ear auditory prostheses, one or more direct acoustic stimulators, one or more other electrically stimulating auditory prostheses (e.g., auditory brain stimulators), one or more vestibular devices (e.g., vestibular implants), one or more visual devices (i.e., bionic eyes), one or more sensors, one or more pacemakers, one or more drug delivery systems, one or more defibrillators, one or more functional electrical stimulation devices, one or more catheters, one or more seizure devices (e.g., devices for monitoring and/or treating epileptic events), one or more sleep apnea devices, one or more electroporation devices, one or more remote microphone devices, one or more consumer electronic devices, etc. For example,
More specifically,
As previously described, the technology disclosed herein can be applied in any of a variety of circumstances and with a variety of different devices. While the above-noted disclosure has been described with reference to medical devices, the technology disclosed herein may be applied to other electronic devices that are not medical devices. For example, this technology may be applied to, e.g., ankle or wrist bracelets connected to a home detention electronic monitoring system, or any other chargeable electronic device worn by a user.
As should be appreciated, while particular uses of the technology have been illustrated and discussed above, the disclosed technology can be used with a variety of devices in accordance with many examples of the technology. The above discussion is not meant to suggest that the disclosed technology is only suitable for implementation within systems akin to that illustrated in the figures. In general, additional configurations can be used to practice the processes and systems herein and/or some aspects described can be excluded without departing from the processes and systems disclosed herein.
This disclosure describes some aspects of the present technology with reference to the accompanying drawings, in which only some of the possible aspects are shown. Other aspects can, however, be embodied in many different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects are provided so that this disclosure is thorough and complete and fully conveys the scope of the possible aspects to those skilled in the art.
As should be appreciated, the various aspects (e.g., portions, components, etc.) described with respect to the figures herein are not intended to limit the systems and processes to the particular aspects described. Accordingly, additional configurations can be used to practice the methods and systems herein and/or some aspects described can be excluded without departing from the methods and systems disclosed herein.
According to certain aspects, systems and non-transitory computer readable storage media are provided. The systems are configured with hardware configured to execute operations analogous to the methods of the present disclosure. The one or more non-transitory computer readable storage media comprise instructions that, when executed by one or more processors, cause the one or more processors to execute operations analogous to the methods of the present disclosure.
Similarly, where steps of a process are disclosed, those steps are described for purposes of illustrating the present methods and systems and are not intended to limit the disclosure to a particular sequence of steps. For example, the steps can be performed in differing order, two or more steps can be performed concurrently, additional steps can be performed, and disclosed steps can be excluded without departing from the present disclosure. Further, the disclosed processes can be repeated.
Although specific aspects were described herein, the scope of the technology is not limited to those specific aspects. One skilled in the art will recognize other aspects or improvements that are within the scope of the present technology. Therefore, the specific structure, acts, or media are disclosed only as illustrative aspects. The scope of the technology is defined by the following claims and any equivalents thereof.
It is also to be appreciated that the embodiments presented herein are not mutually exclusive and that the various embodiments may be combined with one another in any of a number of different manners.