This disclosure relates to microphone arrays and, in particular, to a multi-ringed circular differential microphone array (MR-CDMA) and associated beamformers.
Beamformers (or spatial filters) are used in sensor arrays (e.g., microphone arrays) for directional signal transmission or reception. A sensor array can be a linear array, where the sensors are arranged approximately along a linear platform (such as a straight line), or a circular array, where the sensors are arranged approximately along a circular platform (such as a circle). Each sensor in the sensor array may capture a version of a signal originating from a source. Each version of the signal may represent the signal captured at a particular incident angle with respect to the corresponding sensor at a particular time. The time may be recorded as a time delay to a reference point such as, for example, a first sensor in the sensor array. The incident angle and the time delay are determined according to the geometry of the sensor array.
The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
The captured versions of the signal may also include noise components. An array of analog-to-digital converters (ADCs) may convert the captured signals into a digital format (referred to as a digital signal). A processing device may implement a beamformer to calculate certain attributes of the signal source based on the digital signals.
Each sensor in a sensor array may receive a signal emitted from a source at a particular incident angle with a particular time delay to a reference point (e.g., a reference sensor). The sensors can be of a suitable type such as, for example, microphone sensors that capture sound signals. A microphone sensor may include a sensing element (e.g., a membrane) responsive to the acoustic pressure generated by sound waves arriving at the sensing element, and an electronic circuit to convert the acoustic pressure received by the sensing element into electronic currents. The microphone sensor can output electronic signals (or analog signals) to downstream processing devices for further processing. Each microphone sensor in a microphone array may receive a respective version of a sound signal emitted from a sound source at a distance from the microphone array. The microphone array may include a number of microphone sensors to capture the sound signals (e.g., speech signals) and convert the sound signals into electronic signals. The electronic signals may be converted by analog-to-digital converters (ADCs) into digital signals, which may be further processed by a processing device (e.g., a digital signal processor (DSP)). Compared with a single microphone, the sound signals received at a microphone array include redundancy that may be exploited to calculate an estimate of the sound source to achieve certain objectives such as, for example, noise reduction/speech enhancement, sound source separation, de-reverberation, spatial sound recording, and source localization and tracking. The processed digital signals may be packaged for transmission over communication channels or converted back to analog signals using a digital-to-analog converter (DAC).
The microphone array can be communicatively coupled to a processing device (e.g., a digital signal processor (DSP) or a central processing unit (CPU)) that includes logic circuits programmed to implement a beamformer for calculating an estimate of the sound source. The sound signal received at any microphone sensor in the microphone array may include a noise component and a delayed component with respect to the sound signal received at a reference microphone sensor (e.g., a first microphone sensor in the microphone array). A beamformer is a spatial filter that is implemented on a hardware processor based on certain optimization rules and can be used to identify the sound source based on the multiple versions of the sound signal received at the microphone array.
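For illustration only, and not as part of the disclosed beamformer design, the following minimal sketch shows the basic operation a spatial filter performs: combining the digital microphone channels with a set of complex weights. The weight vector h is assumed to come from a separate design step.

```python
import numpy as np

def apply_spatial_filter(h, y):
    """Combine M microphone channels with a complex weight vector h (length M).

    y: (M, K) array holding K frequency-domain samples per channel for one
       frequency bin; h: (M,) complex spatial filter for that bin.
    Returns the length-K beamformer output z = h^H y.
    """
    return np.conj(h) @ y  # conjugate the weights, sum over the M channels
```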
The sound signal emitted from a sound source can be a broadband signal such as, for example, a speech or audio signal, typically in the frequency range from 20 Hz to 20 kHz. Some implementations of beamformers are not effective in dealing with noise components at low frequencies because the beam-widths (i.e., the widths of the main lobes of the beampattern) associated with the beamformers are inversely proportional to the frequency. To counter the non-uniform frequency response of beamformers, differential microphone arrays (DMAs) have been used to achieve frequency-invariant beam patterns and high directivity factors (DFs), where the DF describes sound intensity with respect to direction angles. DMAs may contain an array of microphone sensors that are responsive to the spatial derivatives of the acoustic pressure field. For example, the outputs of a number of geographically arranged omni-directional sensors may be combined together to measure the differentials of the acoustic pressure fields among microphone sensors. Compared to additive microphone arrays, DMAs allow for small inter-sensor distance, and may be manufactured in a compact manner.
DMAs can measure the derivatives (at different orders) of the acoustic fields received by the microphones. For example, a first-order DMA, formed using the difference between a pair of adjacent microphones, may measure the first-order derivative of the acoustic pressure field, and a second-order DMA, formed using the difference between a pair of adjacent first-order DMAs, may measure the second-order derivative of the acoustic pressure field, where the first-order DMA includes at least two microphones and the second-order DMA includes at least three microphones. Thus, an N-th order DMA may measure the N-th order derivatives of the acoustic pressure field, where the N-th order DMA includes at least N+1 microphones. The N-th order is referred to as the differential order of the DMA. The directivity factor of a DMA may increase with the order of the DMA.
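As a schematic sketch of the differential orders described above (assuming x1, x2, and x3 are the sampled outputs of three closely spaced omnidirectional microphones; this is not the multi-ringed beamformer of the present disclosure):

```python
import numpy as np

def first_order_output(x1, x2):
    """First-order differential output: difference of two adjacent microphones."""
    return np.asarray(x1) - np.asarray(x2)

def second_order_output(x1, x2, x3):
    """Second-order differential output: difference of two first-order outputs,
    which requires at least N + 1 = 3 microphones for order N = 2."""
    return first_order_output(x1, x2) - first_order_output(x2, x3)
```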
The microphone sensors in a DMA can be arranged either along a straight line (referred to as a linear DMA) or along a curve. The curve can be an ellipse and, in particular, a circle (the corresponding DMA is referred to as a circular DMA). Compared to a linear DMA (LDMA), a circular DMA (CDMA) can be steered easily and has a substantially identical performance for sound signals from different directions. This is useful in situations such as, for example, when the sound comes from directions other than along a straight line (or the endfire direction).
CDMAs may include omnidirectional microphones placed on a planar surface substantially along the trace of a circle. An omnidirectional microphone is a microphone that picks up sound with equal gain from all sides or directions with respect to the microphone. CDMAs, however, may amplify white noise associated with the captured signals. The white noise may come from the device noise of the microphones themselves. Minimum-norm filters have been used to improve the white noise gain (WNG) by increasing the number of microphones used in a microphone array for a given DMA order. Although a large number of microphones deployed in a microphone array may improve the WNG, the large number of microphones associated with the minimum-norm filters may result in a larger array aperture and, consequently, more nulls in the lower frequency bands. A null is created when the responses from different frequency bands cancel each other when combined. The nulls may produce undesirable dead regions in the frequency response of the minimum-norm beamformers associated with CDMAs.
Concentric circular differential microphone arrays (CCDMAs) have been used to address the deficiencies of CDMAs. A CCDMA may include more than one circular ring of microphones, where each circular ring may include an identical number of microphones and all of the rings are concentric with respect to a common center. Further, the microphones of a CCDMA may be uniformly distributed on each one of the rings such that the microphones are aligned along radiating lines that partition the circles into equal portions. Compared to CDMAs, where a single ring of microphones is used to form the microphone array, CCDMAs may improve the WNG and eliminate the nulls. The current design of CCDMAs and the associated beamformers relies on a structure in which each ring includes an identical number of uniformly-distributed microphones with respect to a center. Because a CCDMA includes the same number of microphones on every ring, each ring needs to include 2N+1 microphones to construct an Nth-order DMA. Thus, the innermost ring includes the same number of microphones as the outermost ring. However, the inner rings occupy a much smaller area compared to the outer rings. Because each microphone occupies a certain amount of area, it is not practical to place a large number of microphones on the inner circles. This limitation prevents CCDMAs from being deployed in compact devices where the inner ring circles are small and cannot accommodate the same number of microphones as the outer ring circles. Further, CCDMAs require that the microphones of different rings are aligned. This requirement may further limit the design of CCDMAs.
As microphones and the hardware for processing signals captured by microphone arrays become more affordable, DMAs are being designed into a wide range of intelligent systems to provide an interface with human users. Due to the restrictions of product designs, the microphone array may be limited to a compact area, which may obstruct the construction of CCDMAs.
To overcome the above-identified and other deficiencies, implementations of the present disclosure provide a technical solution that may include a multi-ringed CDMA and an associated beamformer. The multi-ringed CDMA may include multiple circular rings of microphones. Compared to CCDMAs, each ring of the multi-ringed CDMA may include a varying number of microphones, thus allowing the placement of fewer microphones on the inner rings. Further, the multi-ringed CDMA does not require that microphones on different rings be aligned along radiating lines because different rings may be associated with different numbers of microphones. Thus, the multi-ringed CDMA provides flexibility for product design as it has fewer restrictions on the number of microphones on different rings and fewer restrictions on the placement of microphones on these rings.
Implementations of the disclosure may further provide a beamformer that matches the structure of the multi-ringed CDMA. To this end, the beam pattern associated with each ring of the multi-ringed CDMA can be represented by an approximation including a series of harmonics (e.g., using the Jacobi-Anger expansion), where the order of the representation is determined by the number of microphones in the ring. Thus, the outer rings may include more microphones and be associated with higher-order expansions; the inner rings may include fewer microphones and be associated with lower-order expansions. To achieve an N-th order beamformer, at least one of the rings includes at least 2N+1 microphones. Based on these approximations, implementations may calculate an Nth-order beamformer for the multi-ringed CDMA that may meet certain optimization criteria. In this way, implementations may achieve flexible multi-ringed CDMA structures that can be implemented in a wide range of product designs.
The microphone sensors in MR-CDMA 102 may receive acoustic signals originating from a sound source at a certain distance. In one implementation, the acoustic signal may include a first component from a sound source (s(t)) and a second noise component (v(t)) (e.g., ambient noise), where t is the time. Due to the spatial distance between microphone sensors, each microphone sensor may receive a different version of the sound signal (e.g., with a different amount of delay with respect to a reference point such as, for example, a designated microphone sensor in MR-CDMA 102 or the origin (O)) in addition to the noise component.
Thus, the coordinates of the mth microphone on the pth ring can be represented as

rp,m=(rp cos Ψp,m, rp sin Ψp,m),

where p=1, 2, . . . , P, m=1, 2, . . . , Mp, and

Ψp,m=Ψp,1+2π(m−1)/Mp

is the angular position of the mth microphone on the pth ring, where the Mp microphones on the p-th ring are placed uniformly along the p-th circle, with Ψp,1>0 being the angular position of the first microphone of the p-th ring. Further, it is assumed that a source signal (plane wave) located in the far field impinges on the multi-ringed array 200 from the direction (azimuth angle) θ, at the speed of sound (C) in the air, e.g., C=340 m/s.
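A brief sketch of this geometry follows; the ring radii, microphone counts, and angular offsets below are hypothetical values chosen only for illustration.

```python
import numpy as np

def ring_angles(M_p, psi_p1=0.0):
    """Angular positions psi_{p,m} = psi_{p,1} + 2*pi*(m-1)/M_p of the M_p
    uniformly spaced microphones on ring p, with psi_p1 the first-microphone offset."""
    return psi_p1 + 2.0 * np.pi * np.arange(M_p) / M_p

def ring_coordinates(r_p, M_p, psi_p1=0.0):
    """Cartesian coordinates r_{p,m} = (r_p cos psi_{p,m}, r_p sin psi_{p,m})."""
    psi = ring_angles(M_p, psi_p1)
    return np.stack([r_p * np.cos(psi), r_p * np.sin(psi)], axis=1)

# Hypothetical two-ring layout: 5 microphones on a 1 cm inner ring and
# 9 microphones on a 2 cm outer ring; no alignment between rings is required.
coords = np.vstack([ring_coordinates(0.01, 5), ring_coordinates(0.02, 9, 0.2)])
```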
Multi-ringed array 200 may be associated with a steering vector that characterizes the multi-ringed array 200. The steering vector may represent the relative phase shifts for the incident far-field waveform across the microphones in multi-ringed array 200. Thus, the steering vector is the response of multi-ringed array 200 to an impulse input. For multi-ringed array 200 that has P rings, where the p-th ring has Mp microphones, the length of the steering vector is M=Σp=1P Mp, or the total number of microphones in multi-ringed array 200. The steering vector can be defined as

d(ω, θ)=[d1T(ω, θ) d2T(ω, θ) . . . dPT(ω, θ)]T,

where

dp(ω, θ)=[ejϖp cos(θ−Ψp,1) ejϖp cos(θ−Ψp,2) . . . ejϖp cos(θ−Ψp,Mp)]T

is the p-th ring's steering vector, the superscript T is the transpose operator, j is the imaginary unit with j2=−1, and

ϖp=ωrp/C,

where ω=2πf is the angular frequency, f>0 is the temporal frequency, and rp is the radius of the p-th ring. In one implementation, the inter-element spacing (i.e., the Euclidean distance between two adjacent microphones) is less than half the acoustic wavelength to avoid spatial aliasing.
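The steering vector above can be evaluated numerically as follows; this is a sketch assuming the far-field plane-wave model and the per-ring quantity ϖp=ωrp/C defined above.

```python
import numpy as np

C = 340.0  # speed of sound in air (m/s)

def ring_steering_vector(omega, theta, r_p, psi_p):
    """d_p(omega, theta): plane-wave phases at the M_p microphones of ring p,
    where psi_p is the array of angular positions and varpi_p = omega*r_p/C."""
    varpi_p = omega * r_p / C
    return np.exp(1j * varpi_p * np.cos(theta - np.asarray(psi_p)))

def steering_vector(omega, theta, radii, angles):
    """Global steering vector of length M: the P per-ring vectors stacked."""
    return np.concatenate([ring_steering_vector(omega, theta, r, psi)
                           for r, psi in zip(radii, angles)])
```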
For convenience, microphones in different rings may be labeled as mp,k, where p=1, 2, . . . , P represents the index of the ring on which the microphone is located, and k=1, . . . , Mp represents the index of a microphone on the p-th ring. Thus, microphone mp,k denotes the k-th microphone on the p-th ring. Microphones mp,k, where k=1, . . . , Mp and p=1, 2, . . . , P, may respectively receive an acoustic signal ap,k(t) originating from a sound source, where t is the time.
Referring to
In one implementation, the processing device 106 may include an input interface (not shown) to receive the digital signals yp,k(t), and as shown in
In one implementation, the pre-processing module 108 may perform STFT on the input yp,k(t) associated with microphone mp,k of MR-CDMA 102 and calculate the corresponding frequency domain representation Yp,k(ω), wherein ω=2πf represents the angular frequency, k=1, . . . , Mp, p=1, 2, . . . , P. In one implementation, MR-CDMA beamformer 110 may receive the frequency representations Yp,k(ω) of the input signals yp,k(t) and calculate an estimate Z(ω) in the frequency domain for the sound source (s(t)). The frequency domain may be divided into a number (L) of frequency sub-bands, and the MR-CDMA beamformer 110 may calculate the estimate Z(ω) for each of the frequency sub-bands.
The processing device 106 may also include a post-processor 112 that may convert the estimate Z(ω) for each of the frequency sub-bands back into the time domain to provide an estimate of the sound source, represented as X1(t). The estimated sound source X1(t) may be determined with respect to the source signal received at a reference microphone (e.g., microphone m1,1) in MR-CDMA 102.
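A sketch of this pre-process/beamform/post-process flow is shown below; it assumes the per-bin filters h(ω) have already been computed by the beamformer design step, and the STFT parameters are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def beamform_multichannel(y, fs, filters, nperseg=256):
    """STFT each channel, apply a per-bin spatial filter, and invert the STFT.

    y:       (M, T) time-domain microphone signals.
    filters: (nperseg // 2 + 1, M) complex filter h(omega) for each STFT bin,
             assumed to be provided by the beamformer design step.
    Returns the time-domain estimate of the source signal.
    """
    _, _, Y = stft(y, fs=fs, nperseg=nperseg)            # Y: (M, bins, frames)
    Z = np.einsum('bm,mbf->bf', np.conj(filters), Y)     # h^H(omega) y(omega) per bin
    _, z = istft(Z, fs=fs, nperseg=nperseg)
    return z
```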
Implementations of the present disclosure may include different types of MR-CDMA beamformers that can calculate the estimated sound source X1(t) using the acoustic signals captured by MR-CDMA 102. The performance of the different types of beamformers may be measured in terms of signal-to-noise ratio (SNR) gain and a directivity factor (DF) measurement. The SNR gain is defined as the signal-to-noise ratio at the output (oSNR) of MR-CDMA 102 compared to the signal-to-noise ratio at the input (iSNR) of MR-CDMA 102. When each of microphones mp,k is associated with white noise including substantially identical temporal and spatial statistical characteristics (e.g., substantially the same variance), the SNR gain is referred to as the white noise gain (WNG). This white noise model may represent the noise generated by the hardware elements in the microphone itself. Environmental noise (e.g., ambient noise) may be represented by a diffuse noise model. In this scenario, the coherence between the noise at a first microphone and the noise at a second microphone is a function of the distance between these two microphones.
The SNR gain for the diffuse noise model is referred to as the directivity factor (DF) associated with MR-CDMA 102. The DF quantifies the ability of the beamformer to suppress spatial noise from directions other than the look direction. The DF associated with MR-CDMA 102 may be written as:

D[h(ω)]=|hH(ω)d(ω, θs)|2/[hH(ω)Γd(ω)h(ω)],

where h(ω)=[h1T(ω) . . . hPT(ω)]T is the global filter for the beamformer associated with MR-CDMA 102, hp(ω)=[Hp,1(ω) Hp,2(ω) . . . Hp,Mp(ω)]T is the spatial filter of length Mp for the p-th ring, the superscript H is the conjugate-transpose operator, d(ω, θs) is the steering vector toward the look direction θs (the incident angle of the desired sound signal), and Γd(ω) is the pseudo-coherence matrix of the diffuse noise field whose (i, j)-th element is

[Γd(ω)]ij=sinc(ωδij/C)=sin(ωδij/C)/(ωδij/C),

where δij=∥ri−rj∥ is the distance between microphone i and microphone j, ∥·∥ is the Euclidean norm, and ri, rjϵ{r1,1, r1,2, . . . , rP,MP}.
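These quantities can be sketched as follows, assuming the definitions of the white noise gain and directivity factor and the sinc coherence of a spherically diffuse noise field given above; positions holds the microphone coordinates rp,m stacked into an (M, 2) array.

```python
import numpy as np

C = 340.0  # speed of sound in air (m/s)

def diffuse_coherence(omega, positions):
    """Pseudo-coherence matrix of a diffuse noise field:
    [Gamma_d]_ij = sinc(omega * delta_ij / C) with delta_ij = ||r_i - r_j||."""
    delta = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return np.sinc(omega * delta / (C * np.pi))  # np.sinc(x) = sin(pi x)/(pi x)

def white_noise_gain(h, d):
    """WNG = |h^H d|^2 / (h^H h)."""
    return np.abs(np.vdot(h, d)) ** 2 / np.real(np.vdot(h, h))

def directivity_factor(h, d, gamma_d):
    """DF = |h^H d|^2 / (h^H Gamma_d h)."""
    return np.abs(np.vdot(h, d)) ** 2 / np.real(np.conj(h) @ gamma_d @ h)
```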
Additionally, MR-CDMA 102 may be associated with a beampattern (or directivity pattern) that reflects the sensitivity of the beamformer to a plane wave impinging on MR-CDMA 102 from a certain angular direction θ. The beampattern for a plane wave impinging from an angle θ for a beamformer represented by a filter h(ω) associated with MR-CDMA 102 can be defined as

B[h(ω), θ]=hH(ω)d(ω, θ),

where h(ω)=[h1T(ω) . . . hPT(ω)]T is the global filter for the beamformer associated with MR-CDMA 102, the superscript H represents the conjugate-transpose operator, and hp(ω)=[Hp,1(ω) . . . Hp,Mp(ω)]T is the spatial filter of length Mp for the p-th ring.
In one implementation, the beampattern is substantially frequency-invariant. MR-CDMA 102 associated with a frequency-invariant beampattern may be used to acquire high fidelity speech and audio signals. Microphone arrays with non-frequency-invariant beampatterns may include distortions in the signal of interest after beamforming.
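Reusing the hypothetical steering_vector helper sketched earlier, the beampattern can be evaluated on a grid of azimuth angles to inspect how frequency-invariant a designed filter is:

```python
import numpy as np

def beampattern(h, omega, radii, angles, thetas):
    """|B[h(omega), theta]| = |h^H d(omega, theta)| over a grid of angles,
    using the steering_vector sketch defined above."""
    return np.array([np.abs(np.vdot(h, steering_vector(omega, th, radii, angles)))
                     for th in thetas])

# Example grid: 360 azimuth angles; comparing the resulting curves at several
# frequencies indicates how frequency-invariant the beampattern is.
thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
```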
It is desirable to steer the beampattern to the direction θs, which is the incident angle of the sound signal. The corresponding frequency-invariant beampattern can be written as B(aN, θ−θs)=Σn=0N aN,n cos(n(θ−θs)), where aN,n are the real coefficients that determine the different directivity patterns of the Nth-order DMA. B(aN, θ−θs) may be rewritten as:

B(b2N, θ−θs)=Σn=−NN b2N,n ejn(θ−θs)=c2NT(θs)pe(θ),

where b2N,0=aN,0, b2N,i=aN,|i|/2 for i=±1, ±2, . . . , ±N, and

b2N=[b2N,−N . . . b2N,0 . . . b2N,N]T,

pe(θ)=[e−jNθ . . . 1 . . . ejNθ]T,

c2N(θs)=diag(ejNθs, . . . , 1, . . . , e−jNθs)b2N=[c2N,−N(θs) . . . c2N,0(θs) . . . c2N,N(θs)]T

are vectors of length 2N+1, respectively, where diag(ejNθs, . . . , 1, . . . , e−jNθs) is a (2N+1)×(2N+1) diagonal matrix and c2N(θs) is the target beampattern. The main beam points in the direction of θs, and B(b2N, θ−θs) is symmetric with respect to the axis θs⇄θs+π.
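The target-beampattern quantities above can be computed with the following sketch; the coefficient values in the example calls are illustrative and do not correspond to a particular directivity pattern from the disclosure.

```python
import numpy as np

def target_coefficients(a_N):
    """b_{2N} from the real coefficients a_{N,0..N}: b_{2N,0} = a_{N,0} and
    b_{2N,i} = a_{N,|i|} / 2 for i = +/-1..+/-N, ordered n = -N..N."""
    a_N = np.asarray(a_N, dtype=float)
    N = len(a_N) - 1
    b = np.zeros(2 * N + 1)
    b[N] = a_N[0]
    for i in range(1, N + 1):
        b[N + i] = b[N - i] = a_N[i] / 2.0
    return b

def steered_target(b2N, theta_s):
    """c_{2N}(theta_s): the target coefficients rotated toward direction theta_s,
    i.e., c_{2N,n}(theta_s) = b_{2N,n} * e^{-j n theta_s}."""
    N = (len(b2N) - 1) // 2
    n = np.arange(-N, N + 1)
    return b2N * np.exp(-1j * n * theta_s)

b2N = target_coefficients([0.25, 0.5, 0.25])   # illustrative 2nd-order coefficients
c2N = steered_target(b2N, theta_s=np.pi / 3)   # steer the main beam to 60 degrees
```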
In implementations of the CCDMA, each ring is approximated by an N-th order Jacobi-Anger expansion. As discussed above, this approach, which requires the same number of microphones for different rings, makes it difficult to deploy CCDMAs in a compact space where the inner rings may not have enough space to accommodate the same number of microphones as the outer rings, thus preventing CCDMAs from being used in certain situations. To overcome this and other deficiencies of CCDMAs, implementations of the present disclosure provide for a beamformer that can accommodate different numbers of microphones in different rings, thus allowing fewer microphones in the inner rings than in the outer rings. The beampattern for each ring may be approximated based on the number of microphones in that ring:

ejϖp cos(θ−Ψp,m)≈Σn=−NpNp jnJn(ϖp)ejn(θ−Ψp,m).
In this case, the pth ring includes at least 2Np+1 microphones. In one implementation, to design an Nth-order symmetric beampattern, at least one ring includes at least 2N+1 microphones to support the Nth-order Jacobi-Anger expansion, i.e.,

max{Np, p=1, 2, . . . , P}≥N.
The outer rings may include more microphones. In one implementation, an outer ring is approximated with a higher-order Jacobi-Anger expansion than an inner ring, i.e., N1≤N2≤ . . . ≤NP from the innermost ring to the outermost ring. In another implementation, the orders of the Jacobi-Anger expansions may vary in any manner from the outer ring to the inner ring as long as at least one ring is associated with an Nth-order Jacobi-Anger approximation.
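The accuracy of the per-ring truncation can be checked numerically, as in the sketch below; the values of ϖp, θ, and Ψp,m are arbitrary test values.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n

def jacobi_anger(varpi_p, theta, psi_pm, N_p):
    """Jacobi-Anger expansion of e^{j varpi_p cos(theta - psi_pm)} truncated to
    the order N_p supported by ring p."""
    n = np.arange(-N_p, N_p + 1)
    return np.sum((1j ** n) * jv(n, varpi_p) * np.exp(1j * n * (theta - psi_pm)))

# Truncation error for a small ring (varpi_p = omega * r_p / C) at a test angle.
varpi_p, theta, psi = 0.5, 0.3, 0.1
exact = np.exp(1j * varpi_p * np.cos(theta - psi))
for N_p in (1, 2, 3):
    print(N_p, abs(exact - jacobi_anger(varpi_p, theta, psi, N_p)))
```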
The approximations for all of the rings can be written with a common highest order N as follows:

ejϖp cos(θ−Ψp,m)≈Σn=−NN αp,njnJn(ϖp)ejn(θ−Ψp,m),

where N is the highest order and αp,n, with αp,n=1 for |n|≤Np and αp,n=0 otherwise, are binary coefficients. Substituting this representation, the beampattern can be written as:

B[h(ω), θ]≈Σn=−NN jn[Σp=1P αp,nJn(ϖp)Ψn,pThp*(ω)]ejnθ,

where Jn(ϖp) is the nth-order Bessel function of the first kind, and

Ψn,p=[e−jnΨp,1 e−jnΨp,2 . . . e−jnΨp,Mp]T

is a vector of length Mp. Written in vector form, equating this approximation with the target beampattern B(b2N, θ−θs) gives

jnΨnT(ω)h*(ω)=c2N,n(θs), n=0, ±1, ±2, . . . , ±N,

where Ψn(ω)=[α1,nJn(ϖ1)Ψn,1T, α2,nJn(ϖ2)Ψn,2T, . . . , αP,nJn(ϖP)Ψn,PT]T is a vector of length M. Thus, the beamforming filters can be obtained by solving
Ψ(ω)h(ω)=J(θs)b2N,

where

J(θs)=diag(j−Ne−jNθs, . . . , 1, . . . , jNejNθs)

is a (2N+1)×(2N+1) diagonal matrix and

Ψ(ω)=[Ψ−N(ω) Ψ−N+1(ω) . . . ΨN(ω)]H

is a (2N+1)×M matrix, which is of full row rank. The minimum-norm solution leads to
hMN(ω)=ΨH(ω)[Ψ(ω)ΨH(ω)]−1J(θs)b2N,
where hMN(ω) may represent the MR-CDMA beamformer 110 associated with MR-CDMA 102. The MR-CDMA beamformer 110 can provide more flexibility to the design of MR-CDMA 102 because the beamformer 110 allows fewer microphones on the inner rings and does not require that microphones on different rings be aligned.
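A compact numerical sketch of this design is given below. It solves the 2N+1 constraints jnΨnT(ω)h*(ω)=c2N,n(θs) directly with a minimum-norm solution (rather than reproducing the matrix notation above verbatim), and it assumes the ring radii, angular positions, and per-ring orders Np are given.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n

def design_mn_beamformer(omega, radii, angles, orders, b2N, theta_s, C=340.0):
    """Minimum-norm MR-CDMA filter sketch for one angular frequency omega.

    radii:  ring radii r_p;  angles: arrays of angular positions psi_p per ring;
    orders: per-ring expansion orders N_p (alpha_{p,n} = 1 only for |n| <= N_p);
    b2N:    target coefficients ordered n = -N..N;  theta_s: look direction.
    """
    N = (len(b2N) - 1) // 2
    n_all = np.arange(-N, N + 1)
    c2N = np.asarray(b2N) * np.exp(-1j * n_all * theta_s)   # c_{2N,n}(theta_s)

    rows = []
    for n in n_all:
        row = []
        for r_p, psi_p, N_p in zip(radii, angles, orders):
            alpha = 1.0 if abs(n) <= N_p else 0.0            # binary alpha_{p,n}
            varpi_p = omega * r_p / C
            row.append(alpha * jv(n, varpi_p) * np.exp(-1j * n * np.asarray(psi_p)))
        rows.append((1j ** int(n)) * np.concatenate(row))    # j^n Psi_n^T(omega)
    A = np.array(rows)                                       # (2N+1) x M constraints

    # Minimum-norm solution of A h* = c_{2N}(theta_s); conjugate to obtain h.
    h_conj = A.conj().T @ np.linalg.solve(A @ A.conj().T, c2N)
    return np.conj(h_conj)
```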
MR-CDMA 102 can include different numbers of rings, and each ring may include a different number of microphones. The performance of MR-CDMA beamformer 110 may depend on the number of rings, the number of microphones on each ring, the radii of the rings, etc.
The nulls associated with the zeros of the Bessel functions decrease because the zeros of ring 302 and ring 304 occur at different frequencies. Further, a center microphone 308 may further boost the frequency response of MR-CDMA 306, thus improving the performance of MR-CDMAs. The experimental results further show that even when the microphones on different rings are not aligned, the frequency response of the MR-CDMA is still substantially frequency-invariant.
For conciseness of discussion, MR-CDMAs are described using circular rings. However, MR-CDMAs are not limited to circular rings. For example, the rings can be ellipses or any other suitable geometric shapes.
For simplicity of explanation, methods are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In one implementation, the methods may be performed by the MR-CDMA beamformer 110 executed on the processing device 106 as shown in
Referring to
At 404, the processing device may receive a plurality of electronic signals generated, responsive to a sound source, by a first number of microphones situated along a first substantial circle having a first radius and by a second number of microphones situated along a second substantial circle having a second radius, wherein a multi-ringed differential microphone array comprises the first number of microphones and the second number of microphones located on a substantially planar platform, and wherein the first number is smaller than the second number.
At 406, the processing device may determine a differential order (N) based on the second number.
At 408, the processing device may execute an N-th order minimum-norm beamformer to calculate an estimate of the sound source based on the plurality of electronic signals.
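Tying the method steps together, a hypothetical usage of the helper sketches from earlier in this disclosure might look as follows; the microphone counts, radii, frequency, and target coefficients are all illustrative.

```python
import numpy as np

M1, M2 = 5, 9                        # first (inner) and second (outer) ring counts
N = (M2 - 1) // 2                    # differential order supported by the outer ring
radii = [0.01, 0.02]                 # illustrative ring radii in meters
angles = [ring_angles(M1), ring_angles(M2, 0.2)]   # sketched earlier; no alignment
orders = [(M1 - 1) // 2, N]          # per-ring Jacobi-Anger orders N_p

b2N = target_coefficients(np.full(N + 1, 1.0 / (N + 1)))   # illustrative a_N
h = design_mn_beamformer(omega=2 * np.pi * 1000.0, radii=radii,
                         angles=angles, orders=orders, b2N=b2N, theta_s=0.0)
```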
The exemplary computer system 500 includes a processing device (processor) 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 518, which communicate with each other via a bus 508.
Processor 502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 502 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor 502 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processor 502 is configured to execute instructions 526 for performing the operations and steps discussed herein.
The computer system 500 may further include a network interface device 522. The computer system 500 also may include a video display unit 510 (e.g., a liquid crystal display (LCD), a cathode ray tube (CRT), or a touch screen), an alphanumeric input device 512 (e.g., a keyboard), a cursor control device 514 (e.g., a mouse), and a signal generation device 520 (e.g., a speaker).
The data storage device 518 may include a computer-readable storage medium 524 on which is stored one or more sets of instructions 526 (e.g., software) embodying any one or more of the methodologies or functions described herein (e.g., processing device 102). The instructions 526 may also reside, completely or at least partially, within the main memory 504 and/or within the processor 502 during execution thereof by the computer system 500, the main memory 504 and the processor 502 also constituting computer-readable storage media. The instructions 526 may further be transmitted or received over a network 574 via the network interface device 522.
While the computer-readable storage medium 524 is shown in an exemplary implementation to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “segmenting”, “analyzing”, “determining”, “enabling”, “identifying,” “modifying” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example’ or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such.
Reference throughout this specification to “one implementation” or “an implementation” means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrase “in one implementation” or “in an implementation” in various places throughout this specification are not necessarily all referring to the same implementation. In addition, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.”
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
This application is a continuation-in-part of International Patent Application No. PCT/IB2017/001436 filed Oct. 24, 2017 which claims priority to U.S. patent application Ser. No. 15/347,482 filed Nov. 9, 2016, the contents of which are incorporated by reference in their entirety.
 | Number | Date | Country
Parent | 15347482 | Nov 2016 | US
Child | PCT/IB17/01436 | | US

 | Number | Date | Country
Parent | PCT/IB17/01436 | Oct 2017 | US
Child | 16117186 | | US