The present disclosure relates to neural stimulation and, in particular, to noninvasive neural stimulation using audio.
For decades, neuroscientists have observed wave-like activity in the brain called neural oscillations. Various aspects of these oscillations have been related to attentional states. The ability to influence attentional states, via noninvasive brain stimulation, would be greatly desirable.
Overview
The present disclosure is directed to methods of neural stimulation with any audio. Example embodiments provide a neuroscience-informed way to select audio components that, when combined with modulated audio components, create an audio arrangement that will stimulate the brain in a noninvasive way.
According to example embodiments of the present application, first data comprising a first range of audio frequencies is received. The first range of audio frequencies corresponds to a predetermined cochlear region of a listener. Second data comprising a second range of audio frequencies is also received. Third data comprising a first modulated range of audio frequencies is acquired. The third data is acquired by modulating the first range of audio frequencies according to a stimulation protocol that is configured to provide neural stimulation of a brain of the listener. The second data and the third data are arranged to generate an audio composition from the second data and the third data.
Example Embodiments
Described herein are techniques that provide for noninvasive neural stimulation of the brain. For example, the techniques of the present application utilize modulation of audio elements (e.g., amplitude modulation or volume modulation) to provide a stimulus to the brain. The concept behind this stimulation may be analogized to the way in which unsynchronized metronomes arranged on a table will synchronize due to constructive and destructive interference of the energy transferred between the metronomes via the platform on which they are arranged.
The present disclosure provides methods, apparatuses, and computer-executable media configured to provide such neural stimulation via audio elements. As used herein, “audio element” refers to a single audio input, usually a single digital file, but it could also be an audio feed from a live recording. As further explained below, the techniques may be particularly effective when the audio stimulation is provided at predetermined frequencies that are associated with known portions of the cochlea of the human ear. Furthermore, the techniques of the present application provide for the selection of waveforms configured to target specific areas of the brain.
With reference now made to FIG. 2, depicted therein is a process flow 200 for generating an audio composition configured to provide neural stimulation.
The process flow 200 begins with an audio element or elements 202. An audio element 202 may be embodied as a live recording, a pre-composed music file, audio with no music at all, or a combination of elements from all three. To achieve better brain stimulation, a wide spectrum of sound may be used, as opposed to just a single tone or several tones. Accordingly, audio elements 202 may be selected such that the combination of audio elements has a large spectral audio profile—in other words, audio elements 202 are selected such that the combination of the audio elements has many frequency components. For example, one or more of audio elements 202 may be selected from music composed for many instruments with timbres that produce overtones all across the spectral profile.
Furthermore, the audio elements 202 may be selected to ensure both that a large number of frequencies are modulated and that unmodulated frequency regions are included, so that a listener is not disturbed by the modulations giving rise to the brain stimulation. For example, according to the techniques described herein, a band pass filter may be used to extract a frequency region, such as 400 Hz to 900 Hz, from an audio element, while a band stop filter may be used to generate a signal with all but the 400 Hz to 900 Hz frequency range. This extraction would result in one audio element file with only this frequency region and one audio element file without it. A “band pass filter” is a device or process that passes frequencies within a certain range and rejects frequencies outside that range, while a “band stop filter,” also called a notch filter, T-notch filter, band-elimination filter, or band-rejection filter, is a conventional audio process that passes most frequencies unaltered but attenuates frequencies within a specified range to very low levels.
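As an illustration of the band pass/band stop split described above, the following sketch uses SciPy Butterworth filters to divide an audio element into the 400 Hz-900 Hz region and its complement. This is a minimal example for clarity, not an implementation from the disclosure; the sample rate and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100  # sample rate in Hz; an assumption for illustration

def split_band(audio, low_hz=400.0, high_hz=900.0, sr=SR, order=8):
    """Split audio into (in_band, out_of_band) components.

    in_band keeps only the low_hz-high_hz region (band pass filter);
    out_of_band keeps everything except that region (band stop filter).
    """
    band = [low_hz, high_hz]
    sos_pass = butter(order, band, btype="bandpass", fs=sr, output="sos")
    sos_stop = butter(order, band, btype="bandstop", fs=sr, output="sos")
    return sosfilt(sos_pass, audio), sosfilt(sos_stop, audio)

# Example: split one second of white noise at 400-900 Hz.
noise = np.random.default_rng(0).standard_normal(SR)
in_band, out_of_band = split_band(noise)
```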
Returning to FIG. 2, the audio elements 202 are first provided to a spectral analyzer 210.
Specifically, spectral analyzer 210 analyzes the frequency components of each audio element 202. If it is determined that one or more of audio elements 202 are composed of a large variety of frequency components across the spectrum, the one or more audio elements 202 are sent to the filter queue 211. As its name implies, the filter queue 211 is a queue for audio filter 230. Because the stimulation protocol 260 may be applied to a specific frequency or a relatively narrow range of frequencies, audio elements 202 that contain a large variety of frequency components undergo filtering in operation 230 to separate these frequency components. For example, audio elements that contain audio from a plurality of instruments may contain audio data with frequency components that span the audible frequency spectrum. Because the stimulation protocol 260 will only be applied to a subset of these frequencies, such audio elements are sent to audio filter 230. In other words, the filtering of operation 230 selects a frequency range from an audio element for modulation.
If it is determined that one or more of audio elements 202 have a single frequency component, or multiple frequency components centered around a narrow band, the one or more audio elements 202 are sent to unfiltered queue 212. In other words, if an audio element 202 covers a sufficiently narrow frequency range, the stimulation protocol 260 may be applied to the entire audio element, and therefore no further filtering is required. Accordingly, such audio elements are sent to audio separator 232. Audio separator 232 examines the spectral data of an audio input and pairs it with a cochlear profile to determine whether the audio input should be modulated.
Additionally, spectral data may be sent from spectral analyzer 210 to one or more of audio filter 230 and audio separator 232. This spectral data may be used, for example, in conjunction with cochlear profile 231, to determine which portions of the audio elements 202 are to be modulated according to stimulation protocol 260.
Both audio filter 230 and audio separator 232 are configured to filter audio elements for modulation (in the case of audio filter 230) or select audio elements for modulation (in the case of audio separator 232) based upon one or more cochlear profiles 231. Cochlear profile 231 provides instructions to one or more of audio filter 230 and audio separator 232 based upon the frequency sensitivity of the cochlea of the human ear. According to the present example embodiment, “cochlear profile” refers to a list of frequency bands to be modulated. Frequencies not specified will be excluded from modulation.
Turning now to the cochlea of the human ear, different regions of the cochlea sense different frequencies of sound.
The cochlea, in addition to sensing different frequencies in different regions, also has sensitivity that varies with the region of the cochlea. Each region has a number of cochlear filters that help the brain decide what to pay attention to. Sensitive cochlear regions draw attention more than insensitive regions. For example, sound in the frequency range of a human scream will draw our attention, whereas the same sound, reduced in pitch to bass level, may be completely overlooked. The difference in reaction is largely due to the sensitivity of different areas in the cochlea. Knowing how the cochlea and the larger auditory system draw our attention enables neural stimulation to be incorporated into audio without disturbing the listener. Specifically, it has been determined that modulation targeting frequencies associated with the insensitive regions of the cochlea will stimulate the brain without disturbing the listener.
For example, by providing stimulation through the modulation of frequencies between 0 Hz and 1500 Hz, the modulation may be less noticeable to the listener while still having a substantial stimulation effect on the brain. Providing modulation at frequencies immediately above the 0 Hz-1500 Hz range may be avoided because the sensitivity of the cochlear regions increases dramatically for these frequencies. Similarly, the stimulation could be provided through modulation at frequencies between 8 kHz and 20 kHz, as the sensitivity of the cochlea decreases at such higher frequencies.
As a counterexample, sensitive areas of the cochlea may be targeted specifically if the audio being modulated supports it without being obtrusive. For example, there are relatively insensitive regions of the cochlea between 5 kHz and 6.5 kHz, and these frequencies may be modulated in audio elements that lack significant audio components in this range. Audio elements created using instruments that do not make great use of that range may thus provide stimulation through modulation of this range.
According to other examples, audio elements created with instruments that make heavy use of a region within a usually insensitive band, such as 900-1200 Hz, may be used for brain stimulation. These special cases may be taken into account using spectral profiling but, generally, avoiding highly sensitive regions is a safe, effective way to strongly stimulate the brain without disturbing the listener.
It has been determined that neural stimulation targeting insensitive regions (i.e., stimulation protocols that modulate high and low frequency sounds) will stimulate the brain without disturbing the listener. For example, stimulation protocols associated with these relatively low sensitivity regions will achieve the entrainment described above.
Further, it has been determined that modulation of both low and high frequencies has a special effect on the brain. If both regions have identical modulation, the brain fuses the two regions into a single virtual source, increasing the fidelity of the stimulation waveform. Therefore, by avoiding sensitive cochlear regions while targeting both high and low regions, the fidelity of the stimulation waveform may be increased without disturbing the listener. For example, a piece of audio could be modulated using frequencies between 0 Hz and 1500 Hz and frequencies between 8 kHz and 20 kHz. The modulation of the two frequency regions may be substantially identical in waveform, phase, and rate. Such modulation may create increased waveform fidelity for both ranges of stimulation.
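A sketch of the dual-band modulation described above appears below: one shared low-frequency gain signal is applied identically to a low band and a high band, while the sensitive middle band passes through untouched. The 10 Hz rate, 0.5 depth, filter order, and exact band edges are illustrative assumptions, not values mandated by the disclosure.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def dual_band_modulate(audio, sr, rate_hz=10.0, depth=0.5):
    """Apply identical amplitude modulation to the 0-1.5 kHz and 8-20 kHz
    bands while leaving the sensitive 1.5-8 kHz band unmodulated."""
    def band(kind, edges):
        sos = butter(6, edges, btype=kind, fs=sr, output="sos")
        return sosfilt(sos, audio)

    low = band("lowpass", 1500)
    mid = band("bandpass", [1500, 8000])
    high = band("bandpass", [8000, min(20000, sr / 2 - 1)])

    # One shared modulator: identical waveform, rate, and phase for both bands.
    t = np.arange(len(audio)) / sr
    gain = (1 - depth) + depth * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))
    return low * gain + mid + high * gain
```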
Fidelity of the waveform is analogous to the difference in resolution of digital images: a low resolution image has fewer pixels, and thus will not be able to capture as much detail as a high resolution image with more pixels. In the same way, high frequency carriers are able to increase the fidelity of a modulation waveform.
The fidelity of the stimulation waveform may have a significant impact on the effectiveness of the neural stimulation provided by the modulation. Just as in audio synthesis, neural oscillations will react differently depending on the waveform of the modulation.
Returning to FIG. 2, the operations of audio filter 230 and audio separator 232 will now be described in greater detail.
For example, audio filter 230 may receive instructions from the cochlear profile 231 for each audio element being filtered. These instructions may indicate which frequency ranges within the audio element are to be modulated, for example, the frequencies corresponding to the less sensitive portions of the human cochlea. In carrying out this operation, audio filter 230 may use one or more band pass filters to extract the chosen frequency components for modulation 240. According to other example embodiments, band stop filters, equalizers, or other audio processing elements known to the skilled artisan may be used in conjunction with or as an alternative to the band pass filter to separate the contents of filter queue 211 into frequency components for modulation 240 and frequency components that will not receive modulation 242.
The frequency components for modulation 240 are passed to modulator 250 in accordance with the frequencies indicated in cochlear profiles 231. The remainder of the frequency components 242 are passed directly to the mixer 251 where modulated and unmodulated frequency components are recombined to form a single audio element 252. This process is done for each audio element in the filter queue 211.
Similarly, audio separator 232 may receive instructions from the cochlear profile 231 selected for each audio element. Based upon the instructions provided by cochlear profile 231, audio separator 232 may separate the audio elements contained in unfiltered queue 212 into audio elements to be modulated 243 and audio elements not to be modulated 244. By placing an audio element into audio elements to modulate 243, audio separator 232 selects a frequency range comprising the entirety of the audio element for modulation. Accordingly, the audio elements to be modulated 243 are sent to modulator 250, while the audio elements not to be modulated are sent to audio arranger 253, where these audio elements will be arranged with audio elements that contain modulation to form a final combined audio element.
Turning to FIG. 6, depicted therein is an example user interface through which a stimulation protocol 260 may be generated.
The rate of the stimulation 620 may be established such that the modulation provided by the stimulation protocol synchronizes the amplitude modulation of the audio elements being modulated with rhythms in the underlying audio elements. The stimulation protocol may also adjust the modulation phase to align with rhythmic acoustic events in the audio. By aligning the modulation with the acoustic events in the audio elements being modulated, the stimulation protocol ensures that the underlying audio elements do not interfere with the stimulation provided by the modulator and, conversely, that the stimulating modulation does not interfere with the underlying music. Rhythmic acoustic events, such as drum beats in music or waves in a beach recording, are perceived in the brain as a form of amplitude modulation. If the modulation provided by the stimulation protocol is not aligned with these rhythmic acoustic events, the misalignment would create interference between the rhythmic elements of the audio elements and the amplitude modulations meant to stimulate the brain. Accordingly, it may be beneficial to synchronize the stimulation protocol modulation with the rhythmic elements of the audio element being modulated.
Furthermore, synchronizing the stimulation protocol modulation with the rhythmic elements of the audio element being modulated prevents distortion of the audio by allowing the crest of the modulation cycle to align with the crest of notes or beats in the music. For example, music at 120 beats per minute equates to 2 beats per second, equivalent to 2 Hz modulation. Quarter notes would align with 2 Hz modulation if the phase is correct, 8th notes would align at 4 Hz, and 32nd notes would align at 16 Hz. If a stimulation protocol is being applied to music in an MP3 file which plays at 120 beats per minute (BPM), the stimulation protocol would modulate the audio elements of the music file at 2 Hz. Specifically, “hertz” refers to a number of cycles per second, so 2 Hz corresponds to 120 BPM, as a 120 BPM piece of music will have two beats every second. Similarly, the rate of modulation may be set as a multiple of the BPM of the audio element.
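The BPM arithmetic above reduces to a one-line helper: the beat-aligned modulation rate is BPM/60 cycles per second, optionally multiplied to follow a subdivision of the beat. The helper below is a hypothetical illustration (the function name is ours, not the disclosure's).

```python
def bpm_to_rate_hz(bpm: float, subdivision: int = 1) -> float:
    """Beat-aligned modulation rate: BPM/60 cycles per second, multiplied
    by a subdivision factor (2 for 8th notes, 8 for 32nd notes, ...)."""
    return (bpm / 60.0) * subdivision

assert bpm_to_rate_hz(120) == 2.0      # quarter notes at 120 BPM -> 2 Hz
assert bpm_to_rate_hz(120, 2) == 4.0   # 8th notes -> 4 Hz
assert bpm_to_rate_hz(120, 8) == 16.0  # 32nd notes -> 16 Hz
```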
Illustrated in FIG. 7 is an example modulation signal 710. The frequency of modulation signal 710 may be selected to match the rate of the rhythmic elements of the audio element being modulated, as described above.
In order to ensure that the stimulation protocol aligns with the rhythmic elements of the audio elements being modulated, the phases of the stimulation modulation and the rhythmic elements of the audio element may be aligned. Returning to the example of the 120 BPM MP3 file, applying 2 Hz modulation to the MP3 file may not align with the rhythmic elements of the MP3 file if the phase of the stimulation modulation is not aligned with the MP3 file. For example, if the maxima of the stimulation modulation are not aligned with the drum beats in the MP3 file, the drum beats would interfere with the stimulation modulation, and the stimulation protocol may cause audio distortion even though the stimulation modulation is being applied at a frequency that matches the rate of a 120 BPM audio element.
Such distortion may be introduced because, for example, MP3 encoding often adds silence to the beginning of the encoded audio file. Accordingly, the encoded music would start later than the beginning of the audio file. If the encoded music begins 250 milliseconds after the beginning of the encoded MP3 file, stimulation modulation that is applied at 2 Hz starting at the very beginning of the MP3 file will be 180° out of phase with the rhythmic components of the MP3 file. In order to synchronize the modulations with the beats in the file, the phase of the modulation would have to be shifted by 180°. If the phase of the modulation is adjusted by 180°, the modulation cycle will synchronize with the first beat of the encoded music.
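The 250 ms example can be verified directly: a lead-in delay corresponds to a phase offset of delay × rate × 360°, wrapped to one cycle. The helper below is a sketch for illustration only.

```python
def delay_to_phase_deg(delay_s: float, rate_hz: float) -> float:
    """Phase offset (in degrees) that shifts a modulator so its cycle
    aligns with beats starting delay_s seconds into the file."""
    return (delay_s * rate_hz * 360.0) % 360.0

# 250 ms of encoder silence under 2 Hz modulation -> 180 degrees out of phase.
assert delay_to_phase_deg(0.250, 2.0) == 180.0
```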
In order to ensure that the stimulation modulation aligns with the rhythmic elements of the audio elements being modulated, the audio elements are provided to a beat detector, an example of which is illustrated as beat detector 220 of FIG. 2.
According to other example embodiments, beat detector 220 may be configured to analyze the content of audio elements to determine information such as the phase and BPM of audio element 202. For example, according to one specific example embodiment, five musical pieces would be selected, and each musical piece would be a WAV file, six minutes long. Beat detector 220 may determine that each of the musical pieces has a BPM of 120. Beat detector 220 may further determine that each musical piece starts immediately, and therefore, each musical piece has a starting phase of 0. According to other examples, beat detector 220 may determine that each musical piece has a silent portion prior to the start of the musical piece, such as the 250 millisecond delay introduced by some MP3 encoding. Beat detector 220 may detect this delay and convert the time delay into a phase shift of the rhythmic elements of the music based upon the BPM of the musical piece.
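A beat detector along the lines of beat detector 220 could be built on an off-the-shelf beat tracker. The sketch below uses librosa (our choice; the disclosure names no library) to estimate a file's BPM and first beat time, then converts the lead-in delay into a starting phase for a beat-rate modulator, as described above.

```python
import numpy as np
import librosa

def detect_bpm_and_phase(path: str) -> tuple[float, float]:
    """Return (bpm, phase_deg): the estimated tempo of the file and the
    phase offset a BPM/60 Hz modulator needs to crest on the first beat."""
    y, sr = librosa.load(path, sr=None, mono=True)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    bpm = float(np.atleast_1d(tempo)[0])
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    first_beat_s = float(beat_times[0]) if len(beat_times) else 0.0
    phase_deg = (first_beat_s * (bpm / 60.0) * 360.0) % 360.0
    return bpm, phase_deg
```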
Returning to FIG. 6, the stimulation protocol may also specify a depth 640 for the modulation. Modulation depth refers to the degree to which the modulation varies the amplitude of the audio being modulated: a depth of 100% varies the amplitude between full scale and silence, while shallower depths produce smaller variations.
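In signal terms, depth scales how far the modulation gain swings. The sketch below (a sinusoidal modulator is assumed for simplicity) leaves the audio untouched at depth 0 and swings the gain between full scale and silence at depth 1.

```python
import numpy as np

def am_with_depth(audio, sr, rate_hz, depth, phase_deg=0.0):
    """Amplitude modulation with controllable depth in [0, 1]."""
    t = np.arange(len(audio)) / sr
    phase = np.deg2rad(phase_deg)
    # Gain oscillates between (1 - depth) and 1.0 at rate_hz.
    gain = (1 - depth) + depth * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t + phase))
    return audio * gain
```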
Returning to FIG. 6, the rate 620 of the stimulation may be specified across the duration of the stimulation protocol.
The depth 640 may also be specified across the duration of the stimulation by moving points 612a-d via user input. Finally, a save button 690 is provided to save the protocol to an electronic medium.
Returning again to FIG. 2, waveform protocol 259 may specify the waveform of the modulation applied by modulator 250.
In music, overtones contribute to timbre—the way a piano and a guitar, playing the same fundamental frequency, will sound completely different from one another. Brain imaging data has also shown that stimulation delivered with waveforms that contain overtones results in broader stimulation of neural oscillatory overtones far past the range of stimulation. As used herein, “neural oscillatory overtones” refers to resonant frequencies above the fundamental frequency of stimulation. Like audio, or any other time-series data, neural oscillations show harmonic and overtone frequencies above the fundamental frequency of stimulation when their spectrum is analyzed.
With reference now made to FIG. 9, depicted therein is an example spectrum of neural oscillations resulting from stimulation with a complex waveform.
Brain imaging data has shown that neural stimulation based upon complex waveforms results in broader stimulation of neural oscillatory overtones far past the range of stimulation due to the presence of overtones, such as the spikes 940a-c of FIG. 9.
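The overtone behavior is ordinary Fourier analysis: a non-sinusoidal modulation waveform carries energy at integer multiples of its fundamental. The sketch below shows the spectrum of a 10 Hz square-wave modulator, whose odd-harmonic spikes are analogous to the spikes 940a-c; the rate, sample rate, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import square

FS = 1000                          # analysis sample rate (Hz), assumed
t = np.arange(0, 10, 1 / FS)       # 10 s of signal -> 0.1 Hz resolution
lfo = square(2 * np.pi * 10 * t)   # 10 Hz square-wave modulation waveform

spectrum = np.abs(np.fft.rfft(lfo)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / FS)
print(freqs[spectrum > 0.05])      # ~[10, 30, 50, ...]: overtones of 10 Hz
```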
Waveform protocol 259 of FIG. 2 may therefore specify a modulation waveform that contains overtones in order to produce such neural oscillatory overtones.
With reference now made to FIG. 10, depicted therein are modulation waveforms shaped to target specific regions of the brain.
Specifically, waveform 1030 is configured to enhance neural stimulation of the frontal cortex, and therefore, is shaped to mimic the shape of frontal cortex oscillations 1010. Accordingly, waveform 1030 is provided with a relatively smooth shape, in this case, a shape similar to that of a sine wave. Waveform 1040 is configured to enhance neural stimulation of the motor cortex, and therefore, is shaped to mimic the “M”-like shape of motor cortex oscillations 1020.
If a user decides to generate a stimulation protocol to help ease anxiety by stimulating at 10 Hz in the frontal regions of the brain, a stimulation protocol may be generated to use the frontal waveform 1030 at a rate of 10 Hz. The modulation could be applied to one or more audio files and played for the user. This process would be much more effective than using a single modulation waveform for all purposes.
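Region-targeted waveforms like 1030 and 1040 can be approximated by summing a few harmonics; the disclosure gives no equations for these shapes, so the “M”-like motor waveform below is our loose illustration, while the frontal waveform is a plain sine.

```python
import numpy as np

def frontal_wave(t, rate_hz=10.0):
    """Smooth, sine-like waveform in the spirit of waveform 1030."""
    return np.sin(2 * np.pi * rate_hz * t)

def motor_wave(t, rate_hz=10.0):
    """Rough 'M'-like waveform in the spirit of waveform 1040: adding a
    third harmonic dents the middle of each half cycle into two peaks."""
    w = 2 * np.pi * rate_hz
    return (np.sin(w * t) + 0.4 * np.sin(3 * w * t)) / 1.4

# Example: a 10 Hz frontal modulator for the anxiety-easing protocol above.
t = np.arange(0, 1, 1 / 1000.0)
anxiety_lfo = frontal_wave(t, rate_hz=10.0)
```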
Waveform protocol 259 of FIG. 2 may therefore specify region-targeted waveforms, such as waveforms 1030 and 1040 of FIG. 10, for use by modulator 250.
Returning to FIG. 2, the stimulation protocol 260 will now be described in greater detail.
The stimulation protocol 260 specifies the duration of the auditory stimulation, as well as the desired stimulation across that timeframe. To control the stimulation, it continually instructs the modulator 250 as to the rate, depth, waveform, and phase of the modulations. As described above, the stimulation protocol 260 may instruct modulator 250 based upon the output of beat detector 220 to ensure that the modulation rates are multiples or factors of the BPM measured from the rhythmic content in the audio elements 202. As also described above, a modulation waveform may be specified in the waveform protocol 259 and provided to modulator 250 via stimulation protocol 260; this waveform is used to effect neural oscillatory overtones and/or to target specific brain regions. Modulation phase control of modulator 250 may likewise be provided by stimulation protocol 260 based upon beat detector 220 to ensure that the phase of the modulation matches the phase of the rhythmic content in the audio elements 202. Finally, modulation depth control is used to manipulate the intensity of the stimulation.
The modulator 250 may use a low-frequency oscillator according to the stimulation protocol 260, which contains ongoing rate, phase, depth, and waveform instructions. Low frequency oscillation (LFO) is a technique in which an additional oscillator that operates at a lower frequency than the signal being modulated modulates the audio signal, thus causing a difference to be heard in the signal without the actual introduction of another sound source. LFO is commonly used by electronic musicians to add vibrato or various other effects to a melody. In this case, it is used to modulate the amplitude, frequency, stereo panning, or filters according to the stimulation protocol 260.
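Pulling the pieces together, a modulator in the spirit of modulator 250 applies an LFO gain whose rate, phase, depth, and waveform all come from the stimulation protocol. The protocol structure and field names below are our assumptions for illustration, not the disclosure's data model.

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable

@dataclass
class Protocol:
    rate_hz: float      # modulation rate, e.g., beat-aligned BPM/60
    phase_deg: float    # phase, aligned to the audio's rhythmic content
    depth: float        # 0..1, intensity of the stimulation
    waveform: Callable[[np.ndarray], np.ndarray]  # phase angle -> [-1, 1]

def modulate(audio: np.ndarray, sr: int, p: Protocol) -> np.ndarray:
    """Amplitude-modulate audio with an LFO driven by the protocol."""
    t = np.arange(len(audio)) / sr
    theta = 2 * np.pi * p.rate_hz * t + np.deg2rad(p.phase_deg)
    unit = 0.5 * (1 + p.waveform(theta))   # waveform mapped to [0, 1]
    gain = (1 - p.depth) + p.depth * unit  # depth sets the gain swing
    return audio * gain

# Example: 2 Hz sinusoidal modulation, half depth, crest shifted 180 degrees.
proto = Protocol(rate_hz=2.0, phase_deg=180.0, depth=0.5, waveform=np.sin)
```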
The modulator 250 is used to modulate frequency components 240 and unfiltered audio elements 243. Frequency components 240 are modulated and then mixed with their counterpart unmodulated components 242 in mixer 251 to produce final filtered, modulated audio elements 252, which are then sent to the audio arranger 253. Audio elements 243, on the other hand, are modulated in full, so they need not be remixed, and are therefore sent directly to the audio arranger 253.
An “audio arranger” is a device or process that allows a user to define a number of audio components to fill an audio composition with music wherever the score has no implicit notes. Accordingly, audio arranger 253 arranges all audio content across the timeline of the stimulation protocol 260.
With reference now made to FIG. 13, depicted therein is a flowchart illustrating a process for generating an audio composition configured to provide neural stimulation of the brain of a listener. According to the process, first data comprising a first range of audio frequencies is received, the first range of audio frequencies corresponding to a predetermined cochlear region of the listener, and second data comprising a second range of audio frequencies is also received.
In operation 1310, third data is acquired that corresponds to a first modulated range of audio frequencies. The third data is acquired by modulating the first range of audio frequencies according to a stimulation protocol configured to provide neural stimulation of a brain of a listener. For example, operation 1310 may include the modulation by modulator 250 of frequency components to modulate 240 and/or audio elements to modulate 243 according to stimulation protocol 260, as illustrated in FIG. 2.
In operation 1320, the second data and third data are arranged to generate an audio composition from the second data and the third data. For example, operation 1320 may include the operations carried out by mixer 251 and/or audio arranger 253 of FIG. 2.
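Read end to end, the claimed method can be sketched in a few lines: split the audio into the first range (first data) and the remainder (second data), modulate the first range per a stimulation protocol to obtain third data, then mix the second and third data into the final composition. This is our illustrative reading, with all parameter values assumed.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def compose(audio: np.ndarray, sr: int, band=(400.0, 900.0),
            rate_hz: float = 2.0, depth: float = 0.5,
            phase_deg: float = 0.0) -> np.ndarray:
    """Generate an audio composition per the claimed method (sketch)."""
    # First data: the first range of audio frequencies (the chosen band).
    sos_pass = butter(8, band, btype="bandpass", fs=sr, output="sos")
    # Second data: the second range of audio frequencies (everything else).
    sos_stop = butter(8, band, btype="bandstop", fs=sr, output="sos")
    first_data = sosfilt(sos_pass, audio)
    second_data = sosfilt(sos_stop, audio)

    # Operation 1310: acquire third data by modulating the first range
    # according to the stimulation protocol (rate, depth, phase).
    t = np.arange(len(audio)) / sr
    gain = (1 - depth) + depth * 0.5 * (
        1 + np.sin(2 * np.pi * rate_hz * t + np.deg2rad(phase_deg)))
    third_data = first_data * gain

    # Operation 1320: arrange second and third data into the composition.
    return second_data + third_data
```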
With reference now made to FIG. 14, depicted therein is a computer device 1400 upon which the techniques of the present application may be implemented. As depicted, the device 1400 includes a bus 1412, which provides communications between computer processor(s) 1414, memory 1416, persistent storage 1418, communications unit 1420, and input/output (I/O) interface(s) 1422. Bus 1412 can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, bus 1412 can be implemented with one or more buses.
Memory 1416 and persistent storage 1418 are computer readable storage media. In the depicted embodiment, memory 1416 includes random access memory (RAM) 1424 and cache memory 1426. In general, memory 1416 can include any suitable volatile or non-volatile computer readable storage media. Instructions for the “Neural Stimulation Control Logic” may be stored in memory 1416 or persistent storage 1418 for execution by processor(s) 1414. The Neural Stimulation Control Logic stored in memory 1416 or persistent storage 1418 may implement the noninvasive neural stimulation through audio techniques of the present application.
One or more programs may be stored in persistent storage 1418 for execution by one or more of the respective computer processors 1414 via one or more memories of memory 1416. The persistent storage 1418 may be a magnetic hard disk drive, a solid state hard drive, a semiconductor storage device, read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or any other computer readable storage media that is capable of storing program instructions or digital information.
The media used by persistent storage 1418 may also be removable. For example, a removable hard drive may be used for persistent storage 1418. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer readable storage medium that is also part of persistent storage 1418.
Communications unit 1420, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit 1420 includes one or more network interface cards. Communications unit 1420 may provide communications through the use of either or both physical and wireless communications links.
The above description is intended by way of example only. Although the techniques are illustrated and described herein as embodied in one or more specific examples, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made within the scope and range of equivalents of the claims.
This application is a continuation of U.S. patent application Ser. No. 16/276,961, filed Feb. 15, 2019, the entirety of which is incorporated herein by reference.