Embodiments of the present disclosure relate to an assistance device for music accompaniment and a method thereof, and more particularly to an intelligent accompaniment generating system and method for assisting a user to play an instrument in a system.
With the development and advancement of computing technology, a musical instrument having a built-in analog-to-digital converter (ADC) can nowadays convert an analog audio signal into a digitized signal for processing. Generally, playing a musical melody and its accompaniment requires musicians to cooperate with one another, or a singer sings the main melody while other musicians play the accompaniment. With the assistance of at least one of digitized software and hardware, a user need only play a melody, and its accompaniment can be generated accordingly.
However, the musical accompaniment so generated will be stiff or dull without changes, and it can only repeat the notes and melodies that it was given, i.e., if the user only plays a few notes, the generated accompaniment will merely correspond to those notes.
In addition, when the user tries to learn or imitate an accompaniment heard on a website, the user may wish to know the chord information and the effect settings that the digitized software or hardware applies to the instrument, so that the user can learn the technique for playing the original accompaniment efficiently and precisely.
Therefore, a device, system or method that provides solutions to the above-mentioned insufficiencies is expected to have commercial potential.
In view of the drawbacks in the above-mentioned prior art, the present invention proposes an intelligent accompaniment generating system and method for assisting a user to play an instrument in a system.
The system can be a cloud system including various electronic devices communicating with each other, and the electronic devices can convert an acoustic audio signal into digitized data and transfer the digitized data to the cloud system for analysis. For example, the electronic devices include a mobile device, musical equipment and a computing device. By means of machine learning, deep learning, big data and audio feature analysis, the cloud system analyzes these data and generates at least one of visual and audio assistance information for the user by using at least one of a database generation method, a rule-base generation method and a machine learning generation algorithm (or an artificial intelligence (AI) method), wherein the accompaniment includes at least one of a beat pattern and a chord pattern.
In accordance with one embodiment of the present disclosure, an intelligent accompaniment generating system is provided. The intelligent accompaniment generating system includes an input module, an analysis module, a generation module and a musical equipment. The input module is configured to receive a musical pattern signal derived from a raw signal. The analysis module is configured to analyze the musical pattern signal to extract a set of audio features, wherein the input module is configured to transmit the musical pattern signal to the analysis module. The generation module is configured to obtain a playing assistance information having an accompaniment pattern from the analysis module, wherein the accompaniment pattern has at least two parts having different onsets therebetween, and the onsets of each of the at least two parts are generated by an algorithm according to the set of audio features. The musical equipment includes a digital amplifier configured to output an accompaniment signal according to the accompaniment pattern.
In accordance with another embodiment of the present disclosure, a method for assisting a user to play an instrument in a system is provided. The system includes an input module, an analysis module, a generating module, an output module and a musical equipment having a computing unit, a digital amplifier and a speaker. The method includes steps of: receiving an instrument signal by the input module; analyzing an audio signal to extract a set of audio features by the analysis module, wherein the audio signal includes one of the instrument signal and a musical signal from a resource; generating a playing assistance information according to the set of audio features by the generating module; processing the instrument signal with a DSP algorithm to simulate amps and effects of bass or guitar on the instrument signal to form a processed instrument signal by the computing unit; amplifying the processed instrument signal by the digital amplifier; amplifying at least one of the processed instrument signal and the musical signal by the speaker; and outputting the playing assistance information by the output module to the user.
In accordance with a further embodiment of the present disclosure, a method for assisting a user to play an instrument in an accompaniment generating system is provided. The accompaniment generating system includes a cloud system. The method includes steps of: receiving a musical pattern signal derived from a raw signal; analyzing the musical pattern signal to extract a set of audio features; generating an accompaniment pattern in the cloud system according to the set of audio features; obtaining a playing assistance information including the accompaniment pattern from the cloud system; obtaining an accompaniment signal according to the accompaniment pattern; amplifying the accompaniment signal by a digital amplifier; and outputting the amplified accompaniment signal by a speaker.
The above embodiments and advantages of the present invention will become more readily apparent to those ordinarily skilled in the art after reviewing the following detailed descriptions and accompanying drawings:
Please refer to all Figs. of the present invention when reading the following detailed description, wherein all Figs. of the present invention demonstrate different embodiments of the present invention by showing examples and help those skilled in the art to understand how to implement the present invention. The present examples provide sufficient embodiments to demonstrate the spirit of the present invention; each embodiment does not conflict with the others, and new embodiments can be implemented through an arbitrary combination thereof, i.e., the present invention is not restricted to the embodiments disclosed in the present specification.
Please refer to
Please further refer to
In any one of the embodiments of the present disclosure, the input module 101 is implemented on a mobile device MD or the musical equipment 104 for receiving the musical pattern signal SMP, and the musical equipment 104 is connected to at least one of the mobile device MD and a musical instrument MI, wherein the musical pattern signal SMP is derived from a raw signal SR of the musical instrument MI played by a user USR. The analysis module 102 and the generation module 103 can be implemented in a cloud system 105. In some embodiments, the analysis module 102 can be implemented in the input module 101 or the musical equipment 104, and the generation module 103 can be implemented in the input module 101 or the musical equipment 104 as well. If the musical equipment 104 has a network component or module, it can record and transmit the musical pattern signal SMP to the analysis module 102 without the mobile device MD. The network component or module may support at least one of Bluetooth®, Wi-Fi and mobile network connections.
In any one of the embodiments of the present disclosure, the analysis module 102 obtains at least one of a beats-per-minute value BPM and genre information GR for the musical pattern signal SMP, or automatically detects the at least one of the BPM and the genre GR of the musical pattern signal SMP. The musical pattern signal SMP is compressed into a compressed musical pattern signal with a compressed format so as to be transmitted to a cloud system 105 including the analysis module 102 and the generation module 103. The mobile device MD or the musical equipment 104 includes a timbre source database 1010, 1040, and receives the accompaniment pattern DAP to call at least one timbre in the timbre source database 1010, 1040 to play, and the at least one timbre is sounded by the musical equipment 104.
In any one of the embodiments of the present disclosure, the analysis module 102 detects a beats-per-minute value BPM and a time signature TS in the set of audio features DAF, detects a global onset GONS of the musical pattern signal SMP to exclude a redundant sound RS before the global onset GONS, and calculates a beat timing point BTP of each measure of the accompaniment pattern DAP according to the BPM and the time signature TS, and the analysis module 102 determines a chord used in the musical pattern signal SMP and a chord timing point CTP according to the chord information CHD and a chord algorithm CHDA. The global onset GONS is the starting timing point of the entire melody played by the user USR. The beat timing point calculation can be sketched as follows.
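A minimal arithmetic sketch, assuming the beat timing points are spaced evenly from the global onset GONS at 60/BPM seconds per beat (the function name and arguments are illustrative, not the disclosure's actual implementation):

```python
# Illustrative only: per-measure beat timing points from BPM and time
# signature, starting at the detected global onset.

def beat_timing_points(bpm: float, beats_per_measure: int,
                       num_measures: int, global_onset: float = 0.0):
    """Return a list of measures, each a list of beat times in seconds."""
    beat_interval = 60.0 / bpm              # seconds per beat
    measures = []
    for m in range(num_measures):
        start = global_onset + m * beats_per_measure * beat_interval
        measures.append([start + b * beat_interval
                         for b in range(beats_per_measure)])
    return measures

# e.g., 120 BPM in 4/4: each beat is 0.5 s, each measure 2.0 s long
print(beat_timing_points(120, 4, 2))
```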
In any one of the embodiments of the present disclosure, the analysis module 102 obtains the set of audio features DAF including at least one of an entropy ENP, onsets ONS, onset weights ONSW of the onsets ONS, mel-frequency cepstral coefficients of a spectrum (MFCC), a spectral complexity, a roll-off frequency of a spectrum, a spectral centroid, a spectral flatness, a spectral flux and a danceability, wherein each of the onset weights ONSW is calculated from a corresponding note volume NV and a corresponding note duration NDUR of the musical pattern signal SMP.
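The disclosure states only that each weight is calculated from the note's volume and duration; a normalized volume-times-duration product is one plausible reading, sketched here as an assumption:

```python
import numpy as np

# Hedged sketch: weight each onset by its note's volume times duration,
# scaled to [0, 1]. This product is an assumed formula, not the
# disclosure's actual calculation.

def onset_weights(note_volumes: np.ndarray, note_durations: np.ndarray) -> np.ndarray:
    raw = note_volumes * note_durations
    peak = raw.max()
    return raw / peak if peak > 0 else raw

weights = onset_weights(np.array([0.9, 0.4, 0.7]), np.array([0.50, 0.25, 1.00]))
print(weights)  # louder, longer notes receive larger weights
```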
In any one of the embodiments of the present disclosure, the analysis module 102 calculates an average value AVG of each of the set of audio features DAF in each measure of the musical pattern signal SMP. The analysis module 102 determines a first complexity 1COMX and a first timbre 1TIMB by inputting the average value AVG into a support vector machine model SVM.
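A toy sketch of such an SVM stage using scikit-learn; the feature dimensions, labels and two-classifier split below are invented for illustration and are not the disclosure's trained model:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative only: one SVM per output (complexity, timbre), trained on
# per-measure feature averages; the toy data and labels are invented.
rng = np.random.default_rng(0)
X = rng.random((200, 10))                     # per-measure averaged audio features
y_complexity = (X[:, 0] > 0.5).astype(int)    # 0 = low, 1 = high (toy labels)
y_timbre = (X[:, 1] > 0.5).astype(int)        # 0 = soft/clean, 1 = noisy/dirty

complexity_svm = SVC(kernel="rbf").fit(X, y_complexity)
timbre_svm = SVC(kernel="rbf").fit(X, y_timbre)

avg = rng.random((1, 10))                     # AVG vector for a new measure
print(complexity_svm.predict(avg), timbre_svm.predict(avg))
```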
Please refer to
Please refer to
In any one of the embodiments of the present disclosure, the first, second and third part drum patterns 1DP, 2DP, 3DP can be a verse drum pattern, a chorus drum pattern and a bridge drum pattern respectively. The song structure can be any combination of the first, second and third part drum patterns 1DP, 2DP, 3DP, and any of the drum patterns can be repeated or played continuously. Preferably, the song structure includes the specific combination of 1DP, 2DP, 3DP and 2DP.
In any one of the embodiments of the present disclosure, the accompaniment pattern DAP has a duration PDUR, and the generation module 103 is further configured to perform the following: generate a first set of bass timing points 1BSTP according to the processed onsets PONS1 respectively in the duration PDUR; add a second set of bass timing points 2BSTP at the time points without the first set of bass timing points 1BSTP in the duration PDUR, wherein the second set of bass timing points 2BSTP is generated according to the processed bass drum onsets ONS_BD1 and the processed snare drum onsets ONS_SD1; and generate a bass pattern 1BSP having onsets on the first set of bass timing points 1BSTP and the second set of bass timing points 2BSTP, wherein the bass pattern 1BSP has notes, and pitches of the notes are determined based on music theory with the chord information CHD. By the same token, another bass pattern 2BSP for the second part can be generated by the above method.
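The two-stage timing logic can be sketched as follows; the list representation of onset times and the matching tolerance are illustrative assumptions:

```python
# Sketch of the two-stage bass timing logic described above: first take
# every processed onset, then add any drum hit that has no processed
# onset at (nearly) the same time.

def bass_timing_points(processed_onsets, bass_drum_onsets, snare_onsets,
                       tolerance=1e-3):
    first_set = sorted(processed_onsets)
    second_set = []
    for t in sorted(set(bass_drum_onsets) | set(snare_onsets)):
        if all(abs(t - p) > tolerance for p in first_set):
            second_set.append(t)
    return first_set, second_set

first, second = bass_timing_points([0.0, 1.0, 2.0], [0.0, 0.5], [1.5])
print(first, second)   # the second set fills 0.5 and 1.5, where no onset exists
```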
In any one of the embodiments of the present disclosure, the accompaniment pattern DAP is further obtained according to different generation types including at least one of a database type, a rule-base type and a machine learning algorithm MLAG. For example, the database type is used when the generation module 103 performs the above algorithm AG. For example, the rule-base type is used when the analysis module 102 obtains at least one of a beats-per-minute value BPM and genre information GR for the musical pattern signal SMP while the user USR improvises ad lib melodies. For example, with the machine learning algorithm MLAG, a trained model for generating the accompaniment pattern DAP can be set up by inputting plural sets of onsets of existing guitar rhythm patterns, existing drum patterns and existing bass patterns.
The present disclosure not only provides the user USR with the playing assistance information through audio-type information of the accompaniment pattern DAP for playing sound signals, such as MIDI (musical instrument digital interface) information, but also provides the user USR with visual-type information for learning a song accompaniment, such as the chord indicating information ICHD. In addition, the song accompaniment may include effect settings applied to an instrument played in the existing music contents, and the present disclosure also provides a mechanism for the user USR to apply effect settings according to the existing music contents.
Please refer to
In any one of the embodiments of the present disclosure, the system 20 includes an input module 202, an analysis module 203, a generating module 204, an output module 205 and a musical equipment 206 having a computing unit 2061, a digital amplifier 2062 and a speaker 2063; for example, the speaker 2063 is a full-range speaker. The method S20 includes steps of: Step S201, receiving an instrument signal SMI by the input module 202; Step S202, analyzing an audio signal SAU to extract a set of audio features DAF by the analysis module 203, wherein the audio signal SAU includes one of the instrument signal SMI and a musical signal SMU from a resource 207; Step S203, generating a playing assistance information IPA according to the set of audio features DAF by the generating module 204; Step S204, processing the instrument signal SMI with a DSP algorithm DSPAG to simulate amps and effects of bass or guitar on the instrument signal SMI to form a processed instrument signal SPMI by the computing unit 2061; Step S205, amplifying the processed instrument signal SPMI by the digital amplifier 2062; Step S206, amplifying at least one of the processed instrument signal SPMI and the musical signal SMU by the speaker 2063; and Step S207, outputting the playing assistance information IPA by the output module 205 to the user 200.
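As a rough sketch of the kind of processing Step S204 implies, a soft-clipping waveshaper with input drive and output level is one classic building block of guitar or bass amp simulation; the disclosure does not specify the actual DSP algorithm DSPAG, so the function below is only an assumption:

```python
import numpy as np

# Minimal, assumed stand-in for an amp-simulation DSP step: input drive,
# tanh soft clipping, and output level applied to the raw signal.

def simulate_amp(raw: np.ndarray, drive: float = 5.0, level: float = 0.8) -> np.ndarray:
    return level * np.tanh(drive * raw)

t = np.linspace(0, 1, 48000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 110 * t)      # roughly an A2 string tone
processed = simulate_amp(clean)                 # the "processed instrument signal"
```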
Please refer to
In any one of the embodiments of the present disclosure, the input module 202 includes at least one of a mobile device MD and the musical equipment 206. When the mobile device MD functions as the input module 202, it can record the instrument signal SMI, or it can capture the musical signal SMU from the resource 207. In one embodiment, when the musical equipment 206 functions as the input module 202, it may have network components for transmitting the audio signal SAU to be connected to some device or some system (for example, the system 20 in
In any one of the embodiments of the present disclosure, the method S20 further includes steps of: receiving the instrument signal SMI by the input module 202, wherein the mobile device MD is connected with the musical equipment 206, the musical equipment 206 is connected with a musical instrument 201, and the instrument signal SMI is derived from a raw signal SR of the musical instrument 201 played by a user 200; inputting at least one of a beats-per-minute value BPM, a time signature TS and genre information GR for the instrument signal SMI into the analysis module 203 by the user 200, or automatically detecting the at least one of the BPM, the time signature TS and the genre GR of the instrument signal SMI by the analysis module 203; transmitting the instrument signal SMI to the analysis module 203; detecting a global onset GONS of the instrument signal SMI to exclude a redundant sound RS before the global onset GONS; calculating a beat timing point BTP of each measure of the beat pattern BP of the accompaniment pattern DAP according to the BPM and the time signature TS; determining the chord indicating information ICHD according to the set of chord information CHD and a chord algorithm CHDA; calculating an average value AVG of each of the set of audio features DAF in each measure of the musical signal SMU and the instrument signal SMI; and detecting the first complexity 1COMX and the first timbre 1TIMB by inputting the average value AVG into a support vector machine model SVM. The step of transmitting the instrument signal SMI to the analysis module 203 includes compressing the instrument signal SMI into a compressed file to transmit to the analysis module 203. Alternatively, the musical equipment 206 or the mobile device MD can also directly transmit the instrument signal SMI to the analysis module 203.
In any one of the embodiments of the present disclosure, the cloud system 105 includes the analysis module 203 and the generating module 204. The beat pattern BP of the accompaniment pattern DAP is a drum pattern. The pre-built database PDB includes a plurality of drum patterns PDP, each of which corresponds to a second complexity 2COMX and a second timbre 2TIMB.
In any one of the embodiments of the present disclosure, the method S20 further includes steps of: step (a): obtaining a database PDB including a plurality of drum patterns PDP, each of which corresponds to a second complexity 2COMX and a second timbre 2TIMB; step (b): selecting a plurality of candidate drum patterns CDP1 from the database PDB according to a specific relationship between the first complexity 1COMX and the first timbre 1TIMB and the second complexity 2COMX and the second timbre 2TIMB, wherein each of the selected plurality of candidate drum patterns CDP1 has at least one of bass drum onsets ONS_BD1 and snare drum onsets ONS_SD1; step (c): determining whether each of the onsets ONS of the set of audio features DAF should be kept or deleted according to the onset weights ONSW respectively, in order to obtain processed onsets PONS, wherein said determining includes one of the following steps: keeping fewer onsets if the first complexity 1COMX is low or the first timbre 1TIMB is soft; and keeping more onsets if the first complexity 1COMX is high or the first timbre 1TIMB is noisy; step (d): comparing the processed onsets PONS with at least one of the bass drum onsets ONS_BD1 and the snare drum onsets ONS_SD1 of each of the selected plurality of candidate drum patterns CDP1 to give scores SCR respectively, wherein the more similar the bass drum onsets ONS_BD1 and the snare drum onsets ONS_SD1 are to the processed onsets PONS, the higher the score; step (e): selecting a first specific drum pattern CDP1 having a highest score SCR_H1 as a first part drum pattern 1DP; obtaining a third complexity 3COMX higher than the first complexity 1COMX; repeating steps (b), (c) and (d) using the third complexity 3COMX instead of the first complexity 1COMX, determining a second specific drum pattern CDP2 having a highest score SCR_H2 from the selected plurality of candidate drum patterns as a second part drum pattern 2DP, and determining a third specific drum pattern CDP3 having a median score SCR_M as a third part drum pattern 3DP; adjusting a sound volume of each of the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP according to the first timbre 1TIMB, wherein the sound volume decreases when the first timbre 1TIMB approaches clean or neat, and increases when the first timbre 1TIMB approaches dirty or noisy; and arranging the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP to obtain the drum pattern of the accompaniment pattern DAP. A code sketch of this selection and scoring flow is given below.
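The following minimal Python sketch illustrates steps (c) through (e) in miniature. The weight thresholds, the timing tolerance and the match-minus-mismatch scoring formula are illustrative assumptions; the disclosure does not specify them:

```python
# Condensed, assumed sketch: keep onsets whose weights clear a bar set by
# complexity/timbre, score each candidate drum pattern by how closely its
# bass/snare hits match the kept onsets, and take the best scorer.

def process_onsets(onsets, weights, complexity, timbre):
    """Keep fewer onsets for low complexity / soft timbre, more otherwise."""
    threshold = 0.6 if (complexity == "low" or timbre == "soft") else 0.3
    return [t for t, w in zip(onsets, weights) if w >= threshold]

def score(candidate_hits, processed_onsets, tolerance=0.05):
    """Reward hits near kept onsets, penalize hits with no matching onset."""
    matched = sum(any(abs(h - o) <= tolerance for o in processed_onsets)
                  for h in candidate_hits)
    return matched - (len(candidate_hits) - matched)

onsets, weights = [0.0, 0.5, 1.0, 1.5], [0.9, 0.2, 0.8, 0.4]
kept = process_onsets(onsets, weights, complexity="low", timbre="soft")
candidates = {"rock_a": [0.0, 1.0], "rock_b": [0.0, 0.5, 1.0, 1.5]}
best = max(candidates, key=lambda k: score(candidates[k], kept))
print(kept, best)   # "rock_a" fits the sparser kept onsets best
```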
In any one of the embodiments of the present disclosure, the method S20 further includes steps of performing a bass pattern generating method, wherein the bass pattern generating method includes steps of: pre-building a plurality of bass patterns PBP in the database PDB, wherein the plurality of bass patterns PBP includes at least one of a first bass pattern P1BSP, a second bass pattern P2BSP and a third bass pattern P3BSP; and corresponding the first bass pattern P1BSP, the second bass pattern P2BSP and the third bass pattern P3BSP to the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP respectively. Specifically, a first set of bass timing points 1BSTP is generated according to the processed onsets PONS respectively in the duration PDUR corresponding to the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP; and a second set of bass timing points 2BSTP is added at the time points without the first set of bass timing points 1BSTP in the duration PDUR, wherein the second set of bass timing points 2BSTP is generated according to the at least one of the bass drum onsets ONS_BD1 and the snare drum onsets ONS_SD1 of the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP. For example, if the bass drum onsets ONS_BD1 and the snare drum onsets ONS_SD1 of the first drum pattern have a specific timing point corresponding to no timing point of the processed onsets used to generate the first drum pattern, then a bass timing point is added at the specific timing point. Next, a first part bass pattern 1BSP having onsets ONS at the corresponding time points of the first set of bass timing points 1BSTP and the second set of bass timing points 2BSTP is generated, wherein the first part bass pattern 1BSP at least partially corresponds to the first bass pattern P1BSP and has notes, and pitches of the notes are determined based on music theory with the chord information CHD. Similarly, a second part bass pattern 2BSP and a third part bass pattern 3BSP can also be generated in the same way as the first part bass pattern 1BSP, wherein the second part bass pattern 2BSP and the third part bass pattern 3BSP at least partially correspond to the second bass pattern P2BSP and the third bass pattern P3BSP respectively.
Please refer to
In any one of the embodiments of the present disclosure, the musical signal SMU is associated with a database PDB having plural sets of pre-built chord information PCHD including the set of chord information CHD of the musical signal SMU. The cloud system 105 or the output module 205 provides the user 200 with the playing assistance information IPA having a difficulty level according to the user's skill level.
Please refer to
In any one of the embodiments of the present disclosure, the accompaniment generating system 10 further includes at least one of a mobile device MD and a musical equipment 104, wherein the set of audio features DAF includes onsets ONS and chord information CHD. The accompaniment pattern DAP is generated according to the onsets ONS and the chord information CHD of the set of audio features DAF. The method S30 further includes steps of: obtaining an accompaniment signal SA according to the accompaniment pattern DAP; amplifying the accompaniment signal SA by a digital amplifier 1041, 2062; and outputting the amplified accompaniment signal SOUT by a speaker 2063. The method S30 further includes steps of: inputting at least one of a beats-per-minute value BPM, a time signature TS and genre information GR into the mobile device MD by a user USR, or automatically detecting the at least one of the BPM, the time signature TS and the genre GR by the cloud system 105, wherein the raw signal SR is generated by a musical instrument MI played by the user USR and the accompaniment pattern DAP includes at least one of a beat pattern BP and a chord pattern CP; and receiving the musical pattern signal SMP by the musical equipment 104 or by the mobile device MD, wherein the mobile device MD is connected with the musical equipment 104, the musical equipment 104 is connected with the musical instrument MI, and the musical pattern signal SMP is transmitted to the cloud system 105 by the mobile device MD or the musical equipment 104. In some embodiments, the musical pattern signal SMP is compressed into a compressed musical pattern signal with a compressed format so as to be transmitted to the cloud system 105.
In any one of the embodiments of the present disclosure, the method S30 further includes steps of: detecting a global onset GONS of the musical pattern signal SMP to exclude a redundant sound RS before the global onset GONS; and calculating a beat timing point BTP of each measure of the accompaniment pattern DAP according to the BPM and the time signature TS. One common way to find such a global onset is sketched below.
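A minimal sketch, assuming the global onset is taken as the first frame whose RMS energy exceeds a fixed threshold; the frame size and threshold are invented values, not the disclosure's detector:

```python
import numpy as np

# Assumed energy-based detector: everything before the first energetic
# frame (the redundant sound RS) is trimmed away.

def trim_to_global_onset(signal, frame=512, threshold=0.02):
    """Return the signal from the first frame whose RMS clears the threshold."""
    for start in range(0, len(signal) - frame, frame):
        rms = np.sqrt(np.mean(signal[start:start + frame] ** 2))
        if rms >= threshold:
            return signal[start:]
    return signal

sr = 44100
noise = 0.005 * np.random.randn(sr)                           # quiet pre-roll
tone = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)     # played note
trimmed = trim_to_global_onset(np.concatenate([noise, tone]))
```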
In any one of the embodiments of the present disclosure, the set of audio features DAF includes at least one of an entropy ENP, onsets ONS, onset weights ONSW of the onsets ONS, mel-frequency cepstral coefficients of a spectrum MFCC, a spectral complexity SC, a roll-off frequency of a spectrum ROFS, a spectral centroid SC, a spectral flatness SF, a spectral flux SX and a danceability DT. Each of the onset weights ONSW is calculated from a corresponding note volume NV and a corresponding note duration NDUR of the musical pattern signal SMP. The method S30 further includes steps of: calculating an average value AVG of each of the set of audio features DAF in each measure of the musical pattern signal SMP; and determining a first complexity 1COMX and a first timbre 1TIMB by inputting the average value AVG into a support vector machine model SVM.
In any one of the embodiments of the present disclosure, a first complexity 1COMX and a first timbre 1TIMB are derived from the set of audio features DAF, and the set of audio features DAF includes onsets ONS and onset weights ONSW of the onsets ONS. The method S30 further includes sub-steps of: sub-step (a): obtaining a database PDB including a plurality of drum patterns PDP, each of which corresponds to a second complexity 2COMX and a second timbre 2TIMB; sub-step (b): selecting a plurality of candidate drum patterns CDP1 from the database PDB according to a similarity degree SD between the second complexity 2COMX and the second timbre 2TIMB and the first complexity 1COMX and the first timbre 1TIMB (for example, a distance between the two coordinate points in
In any one of the embodiments of the present disclosure, the method S30 further includes steps of: obtaining a third complexity 3COMX higher than the first complexity 1COMX; repeating steps (b), (c) and (d) using the third complexity 3COMX instead of the first complexity 1COMX, determining a second specific drum pattern CDP2 having a highest score SCR_H2 from the selected plurality of candidate drum patterns as a second part drum pattern 2DP, and determining a third specific drum pattern CDP3 having a median score SCR_M as a third part drum pattern 3DP; adjusting a sound volume of each of the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP according to the first timbre 1TIMB, wherein the sound volume decreases when the first timbre 1TIMB approaches clean or neat, and increases when the first timbre 1TIMB approaches dirty or noisy; and arranging the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP to obtain the drum pattern of the accompaniment pattern DAP. One reading of the similarity degree SD used to select the candidates is sketched below.
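As a sketch of that candidate selection, the (complexity, timbre) pair of each drum pattern can be treated as a point in a plane, with the similarity degree SD taken as the distance between two such points. The Euclidean metric, the data and the function below are illustrative assumptions:

```python
import math

# Assumed reading of the similarity degree SD: pick the database drum
# patterns whose (complexity, timbre) points lie closest to the analyzed
# pattern's point.

def nearest_patterns(target, database, k=3):
    return sorted(database, key=lambda p: math.dist(target, p["point"]))[:k]

db = [{"name": "pop_1",  "point": (0.2, 0.3)},
      {"name": "rock_1", "point": (0.8, 0.9)},
      {"name": "jazz_1", "point": (0.4, 0.4)}]
print([p["name"] for p in nearest_patterns((0.3, 0.35), db, k=2)])
```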
In any one of the embodiments of the present disclosure, the first, second and third part drum patterns 1DP, 2DP, 3DP can be a verse drum pattern, a chorus drum pattern and a bridge drum pattern respectively. The song structure can be any combination of the first, second and third part drum patterns 1DP, 2DP, 3DP, and any of the drum patterns can be repeated or played continuously. Preferably, the song structure includes the specific combination of 1DP, 2DP, 3DP and 2DP, for example as arranged below.
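A toy sketch of arranging the three part patterns along that preferred verse-chorus-bridge-chorus structure; the one-measure onset grids and the two-second measure length are invented for illustration:

```python
# Concatenate per-measure drum patterns into one onset timeline, offsetting
# each section by the measure duration.

def arrange(sections, patterns, measure_duration):
    track, offset = [], 0.0
    for name in sections:
        track += [offset + t for t in patterns[name]]
        offset += measure_duration
    return track

patterns = {"1DP": [0.0, 1.0], "2DP": [0.0, 0.5, 1.0, 1.5], "3DP": [0.0, 1.5]}
print(arrange(["1DP", "2DP", "3DP", "2DP"], patterns, measure_duration=2.0))
```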
In any one of the embodiments of the present disclosure, the method S30 further includes steps of: pre-building a plurality of bass patterns PBP in the database PDB, wherein the plurality of bass patterns PBP includes at least one of a first bass pattern P1BSP, a second bass pattern P2BSP and a third bass pattern P3BSP; corresponding the first bass pattern P1BSP, the second bass pattern P2BSP and the third bass pattern P3BSP to the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP respectively; generating a first set of bass timing points 1BSTP according to the processed onsets PONS respectively in the duration PDUR; and adding a second set of bass timing points 2BSTP at the time points without the first set of bass timing points 1BSTP in the duration PDUR, wherein the second set of bass timing points 2BSTP is generated according to the processed bass drum onsets ONS_BD1 and the processed snare drum onsets ONS_SD1. For example, if the bass drum onsets ONS_BD1 and the snare drum onsets ONS_SD1 of the first drum pattern have a specific timing point corresponding to no timing point of the processed onsets used to generate the first drum pattern, then a bass timing point is added at the specific timing point. Next, a first part bass pattern 1BSP having onsets ONS on the first set of bass timing points 1BSTP and the second set of bass timing points 2BSTP is generated, wherein the first part bass pattern 1BSP at least partially corresponds to the first bass pattern P1BSP and has notes, and pitches of the notes are determined based on music theory with the chord information CHD. Similarly, a second part bass pattern 2BSP and a third part bass pattern 3BSP can also be generated in the same way as the first part bass pattern 1BSP, wherein the second part bass pattern 2BSP and the third part bass pattern 3BSP at least partially correspond to the second bass pattern P2BSP and the third bass pattern P3BSP respectively.
In any one of the embodiments of the present disclosure, the method S30 further includes an AI method to generate the bass patterns. The AI method includes steps of: generating a model 301 by a machine learning method, wherein the training dataset used by the machine learning method includes plural sets of onsets ONS of existing guitar rhythm patterns, existing drum patterns and existing bass patterns; and generating a first part bass pattern 1BSP having notes, wherein time points of the notes are determined by inputting the onsets ONS of the musical pattern signal SMP, the first part drum pattern 1DP, the second part drum pattern 2DP and the third part drum pattern 3DP into the model, and pitches of the notes are determined based on music theory. Second and third part bass patterns 2BSP, 3BSP can also be generated by the same method. A loose sketch of this AI method follows.
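A very loose sketch of that training setup: each pattern is encoded as a 16-step onset grid, and a model maps concatenated guitar and drum grids to a bass grid. The grid size, model choice and random data are all invented for illustration and are not the disclosure's model 301:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

STEPS = 16
rng = np.random.default_rng(1)
X_train = rng.integers(0, 2, (300, 2 * STEPS))   # [guitar grid | drum grid]
y_train = rng.integers(0, 2, (300, STEPS))       # matching bass grids (toy data)

# One binary classifier per grid step, standing in for the trained model.
model = MultiOutputClassifier(LogisticRegression(max_iter=500)).fit(X_train, y_train)

new_input = rng.integers(0, 2, (1, 2 * STEPS))   # user's onsets + chosen drum parts
bass_grid = model.predict(new_input)[0]           # 1 = place a bass note at this step
# Pitches for the predicted time points would then follow music theory and
# the chord information CHD, as stated above.
```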
In any one of the embodiments of the present disclosure, the musical signal SMU is associated with a database PDB having plural sets of pre-built chord information PCHD including the set of chord information CHD of the musical signal SMU. The cloud system 105 or the output module 205 provides the user 200 with the playing assistance information IPA having a difficulty level according to the user's skill level.
While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention need not be limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, which are to be accorded the broadest interpretation so as to encompass all such modifications and similar structures.