Many audio reproduction systems are capable of recording, transmitting, and playing back synchronous multi-channel audio, sometimes referred to as “surround sound.” Though entertainment audio began with simplistic monophonic systems, it soon developed two-channel (stereo) and higher channel-count formats (surround sound) in an effort to capture a convincing spatial image and sense of listener immersion. Surround sound is a technique for enhancing reproduction of an audio signal by using more than two audio channels. Content is delivered over multiple discrete audio channels and reproduced using an array of loudspeakers (or speakers). The additional audio channels, or “surround channels,” provide a listener with an immersive listening experience.
Surround sound systems typically have speakers positioned around the listener to give the listener a sense of sound localization and envelopment. Many surround sound systems having only a few channels (such as a 5.1 format) have speakers positioned at specific locations in a 360-degree arc about the listener, with all of the speakers arranged in the same plane as each other and the listener's ears. Many higher channel-count surround sound systems (such as 7.1, 11.1, and so forth) also include height or elevation speakers positioned above the plane of the listener's ears to give the audio content a sense of height. Often these surround sound configurations include a discrete low-frequency effects (LFE) channel that provides additional low-frequency bass audio to supplement the bass audio in the other main audio channels. Because this LFE channel requires only a portion of the bandwidth of the other audio channels, it is designated as the “.X” channel, where X is any non-negative integer (such as in 5.1 or 7.1 surround sound).
In traditional channel-based multichannel sound systems, a bass management technique collects the bass from the main audio channels to drive the one or more subwoofers. Because with bass management the main speakers only have to reproduce the higher-frequency portion of the audio signal and not the bass signal, the main speakers can be smaller. Moreover, in traditional channel-based multichannel sound systems the audio signal is output to a specific speaker or speakers in a playback environment.
Audio object-based sound systems use informational data (including positional data in 3D space) associated with each audio object to position the object in the playback environment. Audio object-based systems are indifferent to the number of speakers in the playback environment, and the multitude of possible speaker configurations in playback environments increases the likelihood of bass overload when traditional bass management systems are used. In particular, the bass signal is summed by amplitude, and as multiple coherent bass signals are added together there is the possibility of playing back bass signals at an undesirably high amplitude. This phenomenon is sometimes called “bass build-up.” In other words, the electrical summation of coherent bass signals tends to overemphasize the result compared to how those signals would sound if each were reproduced acoustically by a full-range speaker. This bass build-up problem is exacerbated when audio object-based audio is used.
“Bass management” (also known as “bass redirection”) is a phrase used to describe the process of collecting the low-frequency signals from a number of audio channels (or speakers) and redirecting them to a subwoofer. Classic bass management techniques use low-pass filters to isolate the low-frequency portion (or bass signal) of each audio channel. The bass signal of each audio channel then is summed along with the low-frequency effects signal to form the subwoofer signal that is reproduced using the subwoofer. Speakers typically differ in their ability to reproduce bass. Speakers with smaller woofers (approximately 6″ and less) are less capable of producing very low or deep bass as compared with larger speakers or speakers specifically designed for bass reproduction (such as subwoofers).
As sound systems have evolved from mono to stereo to ever-higher speaker counts, the additional channels still must be distilled down to a single signal that feeds the subwoofer. This is because the subwoofer reproduces very low frequencies, and human hearing localizes very low frequencies poorly. The perception is that the subwoofer handles the bass of sounds placed anywhere in the playback environment.
When using audio object-based sound systems the bass build-up problem is exacerbated due mainly to two issues. First, the playback environment may be grouped into playback zones, and the bass signal at some zones may not be desirable all the time. For example, the playback environment may be a cinema with the speakers grouped into two playback zones: the front of the room (behind the screen) and the rear of the room. Many cinemas have subwoofers in the back walls to reproduce the bass from the rear surround speakers, and subwoofers behind the screen to handle the bass from the screen speakers, so that each of the playback zones has a subwoofer. In some cases it may be desirable to reproduce a bass signal on the subwoofer in the rear playback zone but not the front playback zone. Bass frequencies tend to blend better with higher-frequency audio when the bass signal is reproduced close to the associated sound coming out of the regular speakers.
Second, object audio is unique in that it provides size control over a sound, allowing the sound to be spread from one or two speakers to as many as all of the speakers. However the size is adjusted, it is desirable to spread the sound's coverage without changing the ratio of its bass to its main sound.
One simplistic way to overcome these problems is to apply a fixed scaling factor (or gain coefficient) to each of the bass signals. However, this is only correct for the assumed signals; as a first-order approximation it is not a precise way of controlling bass build-up.
A more sophisticated bass management technique extracts the bass signal prior to the spatial rendering of any audio objects. The shortcoming of this technique is that it does not support bass management within subset zones of speakers. This means that if a speaker should be excluded from bass management, the collected bass signal is nevertheless mixed back into that speaker while the speaker's own bass is still being distributed to the subwoofer. Moreover, that speaker reproduces not only the bass originally destined for it, but the bass from all the other bass-managed speakers as well.
Another type of bass management technique uses wave-field synthesis (WFS). This technique scales the gain of each audio object in order to achieve the correct level of bass from a subwoofer. However, it is not possible, in an error-free manner, to transfer a mix of a subwoofer channel between WFS systems having different loudspeaker densities and a different number of loudspeakers. Moreover, there is no intent and no means to directly address bass buildup resulting from the number of loudspeakers involved.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Embodiments of the bass management system and method are used to maintain the correct balance of the bass reproduced by the subwoofer relative to the sound coming out of the other speakers. The system and method are useful for a variety of different speaker configurations, including speaker configurations having different speaker sub-zones.
In embodiments of the system and method only the bass relevant to a certain zone of speakers is collected for that zone's subwoofer. Any speakers that are excluded from bass management (e.g., L, C, R screen speakers), will receive only the bass appropriate for them (their respective channels plus bass from objects positioned within a certain proximity). The main benefits of embodiments of the system and method are improved sound localization, more uniform spectral balance across the audience, more seamless time blending of the subs with main speakers, and increased headroom.
Embodiments of the system and method assume that all sounds emanate from a consistent distance. No wave field property metadata is used, as it does not exist. Moreover, embodiments of the system and method are power preserving and work for any renderer that generates power-normalized speaker gains across one or more speakers.
Embodiments of the bass management method process an audio signal by inputting or receiving from a renderer a number of power-normalized speaker gain coefficients. The audio signal contains an audio object and associated rendering information. The number of gain coefficients is such that there is a gain coefficient for each speaker channel and each audio object. The method combines the gain coefficients and computes the power of the combined gain coefficients to obtain a power-preserving subwoofer contribution coefficient. Power preserving means that the power of the combined gain coefficients is preserved.
Embodiments of the method also apply the subwoofer contribution coefficient to a subwoofer audio signal to obtain a gain-modified subwoofer audio signal. The subwoofer audio signal is the signal containing the low-frequency or bass portion of the audio signal and audio objects. In some embodiments this bass portion is obtained by using a low-pass filter to strip the low frequencies from the audio signal and audio objects. The gain-modified subwoofer audio signal is played back through a subwoofer to ensure that the amount of bass signal applied to the subwoofer avoids bass management error. Moreover, embodiments of the method ensure that when the audio objects are spatially rendered in the audio environment the amount of subwoofer contribution is correct for each of the multiple audio objects and that any bass management errors are avoided or mitigated.
In some embodiments the speakers in the audio environment are divided into multiple speaker zones. In some embodiments these speaker zones contain a different number of speakers, different types of speakers, or both, as compared to other speaker zones in the audio environment. In the case of multiple speaker zone embodiments a subwoofer contribution coefficient is computed for each of the speaker zones. In some embodiments the subwoofer contribution coefficient is computed for each subwoofer in the multiple speaker zones.
The power of the combined gain coefficients is obtained by first squaring each of the gain coefficients to obtain squared gain coefficients. These squared gain coefficients are summed to obtain a squared sum. The square root of the squared sum is taken, and the result is the subwoofer contribution coefficient. If there are multiple speaker zones then only the gain coefficients from the speakers contained in the particular speaker zone (including the subwoofer) are used in the calculation of the subwoofer contribution coefficient.
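The per-zone computation described above can be sketched as follows. This is an illustrative example only, not a description of any particular embodiment; the speaker names and the `subwoofer_contribution` helper are hypothetical.

```python
import math

def subwoofer_contribution(gains, zone_speakers):
    """Power-preserving subwoofer contribution coefficient for one zone.

    gains: mapping of speaker name -> power-normalized gain coefficient
    zone_speakers: names of the speakers belonging to the zone whose
    subwoofer is being fed (speakers outside the zone are ignored).
    """
    # Square each in-zone gain, sum the squares, then take the square root.
    return math.sqrt(sum(gains[s] ** 2 for s in zone_speakers))

# Example: an object panned equally across two rear-surround speakers.
gains = {"Rs1": 0.7071, "Rs2": 0.7071, "L": 0.0, "C": 0.0, "R": 0.0}
coeff = subwoofer_contribution(gains, ["Rs1", "Rs2"])
```

Because the renderer's gains are power-normalized, an object panned entirely within a zone yields a coefficient near 1.0, preserving the object's total power in the subwoofer feed.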
It should be noted that alternative embodiments are possible, and steps and elements discussed herein may be changed, added, or eliminated, depending on the particular embodiment. These alternative embodiments include alternative steps and alternative elements that may be used, and structural changes that may be made, without departing from the scope of the invention.
Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
In the following description of embodiments of a bass management system and method, reference is made to the accompanying drawings. These drawings show, by way of illustration, specific examples of how embodiments of the bass management system and method may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the claimed subject matter.
Following are some basic terms and concepts used in this document. Note that some of these terms and concepts may have slightly different meanings than they do when used with other audio technologies.
This document discusses both channel-based audio and object-based audio. Music or soundtracks traditionally are created by mixing a number of different sounds together in a recording studio, deciding where those sounds should be heard, and creating output channels to be played on each individual speaker in a speaker system. In this channel-based audio, the channels are meant for a defined, standard speaker configuration. If a different speaker configuration is used, the sounds may not end up where they are intended to go or at the correct playback level.
In object-based audio, all of the different sounds are combined with information or metadata describing how the sound should be reproduced, including its position in a three-dimensional (3D) space. It is then up to the playback system to render the object for the given speaker system so that the object is reproduced as intended and placed at the correct position. With object-based audio, the music or soundtrack should sound essentially the same on systems with different numbers of speakers or with speakers in different positions relative to the listener. This methodology helps preserve the true intent of the artist.
The phrase “gain coefficient” refers to an amount by which the level of an audio signal is adjusted to increase or decrease its volume. The term “rendering” indicates a process to transform a given audio distribution format to the particular playback speaker configuration being used. Rendering attempts to recreate the playback spatial acoustical space as closely to the original spatial acoustical space as possible given the parameters and limitations of the playback system and environment.
When either surround or elevated speakers are missing from the speaker layout in the playback environment, then audio objects that were meant for these missing speakers may be remapped to other speakers that are physically present in the playback environment. In order to enable this functionality, “virtual speakers” can be defined that are used in the playback environment but are not directly associated with an output channel. Instead, their signal is rerouted to physical speaker channels by using a downmix map.
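As a rough sketch of how such rerouting might work, the following example applies a downmix map expressed as a simple table. The map structure and all names (`apply_downmix`, `Tsl1`, `Lss1`, `Lrs1`) are illustrative assumptions, not part of any standardized format.

```python
def apply_downmix(virtual_signals, downmix_map):
    """Reroute virtual-speaker signals to physical output channels.

    virtual_signals: virtual speaker name -> signal level (or gain)
    downmix_map: virtual speaker name -> {physical speaker: mix gain}
    """
    physical = {}
    for vname, value in virtual_signals.items():
        # Distribute each virtual speaker's signal to its physical targets.
        for pname, gain in downmix_map.get(vname, {}).items():
            physical[pname] = physical.get(pname, 0.0) + value * gain
    return physical

# A missing height speaker "Tsl1" split equally between two physical speakers.
downmix_map = {"Tsl1": {"Lss1": 0.5, "Lrs1": 0.5}}
out = apply_downmix({"Tsl1": 1.0}, downmix_map)
```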
Subwoofers are a common way to extend the bass response in home audio systems. Subwoofers in the home allow the main speakers to be smaller, less expensive, and more easily replaced. This is especially useful in surround sound systems that include 5, 7, or more speakers. In these systems, “bass management” techniques apply crossover filters (complementary low-pass and high-pass filters) to redirect the bass frequencies from the main channels, add them together, and present the combined signal to the subwoofer.
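The crossover idea can be illustrated with a deliberately simplified pair of complementary filters. A real bass manager would use steeper filters (for example, fourth-order sections), so this first-order sketch is only a conceptual illustration; the function name and `alpha` parameter are assumptions for the example.

```python
def one_pole_crossover(samples, alpha=0.1):
    """Split a signal into complementary low- and high-passed parts.

    alpha sets the (illustrative) crossover: smaller alpha -> lower cutoff.
    The two outputs sum back to the input exactly (complementary filters).
    """
    low, lp_state = [], 0.0
    for x in samples:
        lp_state += alpha * (x - lp_state)        # one-pole low-pass
        low.append(lp_state)
    high = [x - l for x, l in zip(samples, low)]  # complementary high-pass
    return low, high

# A DC (pure bass) input ends up almost entirely in the low band once settled.
low, high = one_pole_crossover([1.0] * 200, alpha=0.1)
```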
Historically, cinemas have used subwoofers for many decades, driven from a specific LFE channel in the soundtrack. However, bass management typically was not used. Current 5.1 cinemas have multiple surround speakers distributing the surround channels around the audience. There may be 5, 10 or more speakers in a surround array all carrying the same signal and thus sharing the load.
With the advent of object-based audio for film sound, such as multi-dimensional audio (MDA), each speaker is driven individually. Thus, each speaker may carry unique signals or play in isolation. There is now a desire to improve the sound quality of the surround speakers to better match the screen channels. This means as sounds are panned around the cinema the perceived quality remains more consistent. Bass management is seen as an effective means to improve the bass capability and power handling of the surround speakers. This requires every surround speaker's signal to be included in the bass management system and method.
However, one problem with the arrangement shown in
When two identical signals are electrically summed the result is 6 dB stronger. In contrast, when those two signals are played in separate speakers in a cinema, the acoustic summation will be only 3 dB stronger. That means the subwoofer level with traditional bass management summing will be 3 dB too high. If there were four source signals the error would increase to 6 dB. A modern immersive cinema may have some 30-50 speakers in total, with almost half of them feeding a bass management system. The excessive bass buildup will be significant. Because the positioning and allocation of the audio signals among the speakers changes dynamically, there is no fixed gain offset that can correctly compensate for the error buildup problem. Moreover, with an object-based system the final rendering configuration is unknown. Therefore, when applying bass management to an object-based system, the bass management system must be more intelligent as compared to standard bass management systems.
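The 3 dB and 6 dB figures above follow directly from the difference between amplitude (electrical) summing and power (acoustic) summing, as this short calculation illustrates. The `buildup_error_db` helper is purely illustrative.

```python
import math

def buildup_error_db(n):
    """Error (in dB) between coherently summing n identical bass signals
    electrically and summing them acoustically over n separate speakers."""
    electrical = 20 * math.log10(n)           # amplitudes add: n times the level
    acoustic = 20 * math.log10(math.sqrt(n))  # powers add: sqrt(n) times the level
    return electrical - acoustic

err2 = buildup_error_db(2)  # two sources: subwoofer ends up ~3 dB too high
err4 = buildup_error_db(4)  # four sources: error grows to ~6 dB
```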
Embodiments of the bass management system and method mitigate bass management error by using explicit information available in the object audio rendering process to derive the correct subwoofer contribution for each audio object. Embodiments of the system and method are suitable for use in commercial cinema processors, or in a non-real-time pre-rendering process that may run in a cinema media block (server). In addition, this process may prove useful in object-based consumer surround processors.
The speaker configuration shown in
The cinema environment 500 also includes a Top-Surround Right (Tsr) array of n number of speakers including speakers Tsr1 to Tsr(n). Similarly, on the left side of the cinema is a Top-Surround Left (Tsl) array of n number of speakers including speakers Tsl1 to Tsl(n). Once again for clarity and to avoid clutter in the drawing the individual speakers in the Tsl array are not shown in
For pedagogical purposes and to avoid clutter,
The system and method shown in
The system renderer uses a mathematical process to determine exactly how much of any given sound is going to any given speaker. This information is used to determine how much bass is being duplicated into different speakers. The computation takes all the different gain coefficients, combines them, and uses the result to modulate the amount of bass sent from that signal to a subwoofer.
In
In order to determine a subwoofer contribution coefficient for a subwoofer, the gain coefficients of the gain coefficient array 610 are processed based on the subwoofer zones of which they are a part. As explained in detail below, the processing to obtain the subwoofer contribution coefficient includes computing the power of the gain coefficients to compute the power-preserving subwoofer contribution coefficient for each subwoofer. The gain coefficients may change dynamically as the soundtrack changes. In some embodiments a smoothing function is used to mitigate audible artifacts as the computed subwoofer contribution coefficients modulate the audio feeding the subwoofer.
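One plausible form for such a smoothing function is a simple exponential (one-pole) smoother. The function below is a hypothetical sketch of that idea, not the specific smoothing used by any embodiment; the `smoothing` parameter and frame-based formulation are assumptions.

```python
def smooth_coefficients(raw_coeffs, smoothing=0.5):
    """Exponentially smooth a sequence of subwoofer contribution coefficients.

    smoothing in [0, 1): higher values change the applied coefficient more
    gradually, reducing audible modulation artifacts ("zipper" noise).
    """
    smoothed, state = [], raw_coeffs[0]
    for c in raw_coeffs:
        # Move part of the way toward the newly computed coefficient.
        state = smoothing * state + (1.0 - smoothing) * c
        smoothed.append(state)
    return smoothed

# A step from 0.2 to 1.0 is eased in over several frames rather than jumping.
out = smooth_coefficients([0.2, 1.0, 1.0, 1.0, 1.0], smoothing=0.5)
```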
The gain coefficients are applied to the waveform dependent on whether the signal destination is a regular speaker or a subwoofer in the coefficient applicator section of the system 600 and method (box 620). If the destination is a regular speaker, the gain coefficient is applied to the waveform and the gain-modified signal is sent to the speaker output busses (box 630). Crossover filters are applied (box 640) and the processed audio signal is played back on the respective speakers (box 650).
If the destination is a subwoofer for the speaker zone then the system 600 and method computes a subwoofer contribution coefficient for the subwoofer. The derivation of the subwoofer contribution coefficient for one object feeding the Rs Sub zone subwoofer is shown in box 660 of
The same process applies to all objects in the soundtrack, with their outputs merged in the speaker output busses, and then fed to the bass management high-pass and low-pass crossover filters. Embodiments of the system 600 and method make use of the rendering information, which includes how much of the audio object is going to each speaker (including subwoofers).
It should be noted that the manner in which the gain coefficients are determined, and thus the choice of renderer algorithm, is irrelevant to the bass management system 600 and method described herein. They are not specific to VBAP, MDA, or any one type of renderer; in fact, they are independent of the renderer. All of the rendering is performed upstream of embodiments of the bass management system 600 and method described herein, and it makes no difference which rendering algorithm is used.
Each of the gain coefficients represents a scale factor in terms of sound amplitude. The squares of those gain coefficients are summed and the square root of the sum is taken to produce a final coefficient; in effect, this is the root sum of squares of the gain coefficients. This is represented by Equation (1) set forth below.
It is desirable to use the power of the signal and not just the sum of the gain coefficients, because if the gain coefficients are merely summed the result represents the intensity of the sound rather than its power. The acoustic result is properly represented by the power of those contributions. When rendering sound across numerous speakers, maintaining the same subjective loudness across the speakers requires maintaining the same electrical power; that is why electrical power is the relevant metric here for the bass. Moreover, this is precisely what is violated when all the signals are simply added together: the sum no longer represents the power but the intensity, and acoustically this is where the disparity arises.
In an object-based system, the playback system's renderer is the mechanism that controls the allocation of audio signals among the available speakers. Multiple rendering functions may operate in parallel on a given audio object, such as VBAP, Divergence, or Aperture. Each function determines the appropriate allocation of the waveform across the relevant speakers. The allocations are controlled by gain coefficients for each speaker. When multiple functions are operating in parallel on the waveform feeding a single speaker, the gain coefficients are first multiplied together to obtain a final gain coefficient before being applied to the waveform.
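The multiplication of parallel per-function gains into a final gain coefficient can be sketched as follows. The function name and the example gain values are illustrative only.

```python
from functools import reduce

def final_gain(per_function_gains):
    """Combine gains from parallel rendering functions for one speaker.

    per_function_gains: the gains produced by, e.g., panning, Divergence,
    and Aperture functions for the same speaker. They multiply into one
    final gain coefficient before being applied to the waveform.
    """
    return reduce(lambda a, b: a * b, per_function_gains, 1.0)

# Hypothetical example: panning gain 0.7071, divergence 0.9, aperture 1.0.
g = final_gain([0.7071, 0.9, 1.0])
```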
Each final gain coefficient represents a direct measure of the signal level of the waveform feeding each speaker. This explicit knowledge has never been available to a playback system before, and it allows the bass management system 600 to accurately calculate the acoustic power of the object's waveform across every speaker involved in bass management. That resulting power value represents the desired amount of bass signal to be fed to the subwoofer. The final gain coefficients for each speaker are shown as g1 through gn in
In the embodiment shown in
subwoofer contribution = waveform × √(g4² + g5² + … + gn²)    (1).
Equation (1) is used to compute a subwoofer contribution coefficient for the audio object.
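A minimal sketch of Equation (1) in code, assuming the low-passed object waveform is available as a list of samples and the zone's final gain coefficients g4 through gn are known. The function name and example values are illustrative assumptions.

```python
import math

def bass_feed(waveform, zone_gains):
    """Scale the object's (low-passed) waveform by the power-preserving
    subwoofer contribution coefficient for a zone, per Equation (1).

    zone_gains: the final gain coefficients g4..gn for the bass-managed
    speakers in the zone.
    """
    coeff = math.sqrt(sum(g * g for g in zone_gains))
    return [s * coeff for s in waveform]

# Object spread evenly over four bass-managed speakers at gain 0.5 each:
# coefficient = sqrt(4 * 0.25) = 1.0, so the waveform passes at unity gain.
sub_signal = bass_feed([1.0, -1.0, 0.5], [0.5, 0.5, 0.5, 0.5])
```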
The general operation of embodiments of the bass management system 600 and method shown in
Alternate embodiments are possible where all speakers are uniformly bass managed to a common subwoofer, as may be the case in smaller scale installations, either commercial or consumer oriented. These alternate embodiments do not require any calculation of coefficients. This is possible because the audio feeding the subwoofer is taken prior to the rendering operation, thereby avoiding the summation of multiple copies of the audio.
The embodiments shown in
The embodiments of
As shown in
The Objects are rendered, main processing 740 is applied to the Objects, and subs processing 750 is applied to the low-frequency signal. Both the processed main object signal and the processed low-frequency signal are played back in an audio environment 760. In some embodiments the processed main object signal is run through a surround processor (not shown) that spreads it among the surround sound speakers (typically 5, 7, or 11 speakers). The surround processor performs spatial rendering of the multiple audio objects in the audio environment over the surround sound speakers such that they form a surround sound configuration in the audio environment. The processed low-frequency bass can either be mixed back into the main signal or sent through a subwoofer.
Some embodiments of the bass management system and method include a metadata parameter called a Rendering Exception parameter. The Rendering Exception parameter allows any gain changes to be made in the renderer when there is a renderer exception. This occurs after the bass from all the objects has been corrected and it is desirable to change how much of that object is represented in a speaker further downstream. If the level of the object is changing then it is also prudent to change how much of its bass is represented.
Specifically, in
The Objects are rendered in accordance with any gain changes made in the OBAE renderers. Main processing 845 is applied to the Objects and subs processing 850 is applied to the low-frequency signal. Both the processed main object signal and the processed low-frequency signal are played back in an audio environment 860. Similar to the embodiments shown in
Embodiments of the bass management system and method shown in
Many other variations than those described herein will be apparent from this document. For example, depending on the embodiment, certain acts, events, or functions of any of the methods and algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (such that not all described acts or events are necessary for the practice of the methods and algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, such as through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and computing systems that can function together.
The various illustrative logical blocks, modules, methods, and algorithm processes and sequences described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and process actions have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of this document.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor, a processing device, a computing device having one or more processing devices, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor and processing device can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
Embodiments of the bass management system and method described herein are operational within numerous types of general purpose or special purpose computing system environments or configurations. In general, a computing environment can include any type of computer system, including, but not limited to, a computer system based on one or more microprocessors, a mainframe computer, a digital signal processor, a portable computing device, a personal organizer, a device controller, a computational engine within an appliance, a mobile phone, a desktop computer, a mobile computer, a tablet computer, a smartphone, and appliances with an embedded computer, to name a few.
Such computing devices typically can be found in devices having at least some minimum computational capability, including, but not limited to, personal computers, server computers, hand-held computing devices, laptop or mobile computers, communications devices such as cell phones and PDAs, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, audio or video media players, and so forth. In some embodiments the computing devices will include one or more processors. Each processor may be a specialized microprocessor, such as a digital signal processor (DSP), a very long instruction word (VLIW) processor, or other microcontroller, or can be a conventional central processing unit (CPU) having one or more processing cores, including specialized graphics processing unit (GPU)-based cores in a multi-core CPU.
The process actions of a method, process, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor, or in any combination of the two. The software module can be contained in computer-readable media that can be accessed by a computing device. The computer-readable media includes both volatile and nonvolatile media that is either removable, non-removable, or some combination thereof. The computer-readable media is used to store information such as computer-readable or computer-executable instructions, data structures, program modules, or other data. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media.
Computer storage media includes, but is not limited to, computer or machine readable media or storage devices such as Blu-ray discs (BD), digital versatile discs (DVDs), compact discs (CDs), floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM memory, ROM memory, EPROM memory, EEPROM memory, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
A software module can reside in the RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of non-transitory computer-readable storage medium, media, or physical computer storage known in the art. An exemplary storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The processor and the storage medium can reside in an application specific integrated circuit (ASIC). The ASIC can reside in a user terminal. Alternatively, the processor and the storage medium can reside as discrete components in a user terminal.
The phrase “non-transitory” as used in this document means “enduring or long-lived”. The phrase “non-transitory computer-readable media” includes any and all computer-readable media, with the sole exception of a transitory, propagating signal. This includes, by way of example and not limitation, non-transitory computer-readable media such as register memory, processor cache and random-access memory (RAM).
The phrase “audio signal” refers to a signal that is representative of a physical sound.
Retention of information such as computer-readable or computer-executable instructions, data structures, program modules, and so forth, can also be accomplished by using a variety of the communication media to encode one or more modulated data signals, electromagnetic waves (such as carrier waves), or other transport mechanisms or communications protocols, and includes any wired or wireless information delivery mechanism. In general, the term “modulated data signal” refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information or instructions in the signal. For example, communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, radio frequency (RF), infrared, laser, and other wireless media for transmitting, receiving, or both, one or more modulated data signals or electromagnetic waves. Combinations of any of the above should also be included within the scope of communication media.
Further, one or any combination of software, programs, or computer program products that embody some or all of the various embodiments of the bass management system and method described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer- or machine-readable media or storage devices and communication media in the form of computer-executable instructions or other data structures.
Embodiments of the bass management system and method described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. The embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks. In a distributed computing environment, program modules may be located in both local and remote computer storage media including media storage devices. Still further, the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.
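By way of non-limiting illustration only, a program module implementing the general bass management behavior described earlier (collecting the bass from the main channels to drive one or more subwoofers) might be sketched as follows. This is a minimal sketch, not the claimed method: the function name `bass_manage`, the 80 Hz crossover frequency, and the first-order complementary crossover are assumptions chosen for brevity; any particular embodiment may use different filters, slopes, and frequencies.

```python
import math

def bass_manage(channels, fs=48000.0, fc=80.0):
    """Split each main channel at fc with a first-order complementary
    crossover: the highs remain on the main channel, and the lows of
    all channels are summed into a single subwoofer feed."""
    # One-pole low-pass smoothing coefficient for cutoff fc at rate fs.
    a = (2.0 * math.pi * fc) / (2.0 * math.pi * fc + fs)
    n = len(channels[0])
    sub = [0.0] * n   # summed bass feed for the subwoofer
    mains = []        # high-passed main channels
    for ch in channels:
        lp = 0.0
        hp_out = []
        for i, x in enumerate(ch):
            lp += a * (x - lp)      # low-pass (bass) component
            hp_out.append(x - lp)   # complementary high-pass remainder
            sub[i] += lp            # collect bass into the subwoofer feed
        mains.append(hp_out)
    return mains, sub
```

Because the high-pass output is formed as the input minus the low-pass output, each main channel's highs plus its contribution to the subwoofer feed reconstruct the original channel sample for sample, which is one simple way to satisfy the goal of relieving the main speakers of bass without discarding it.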
Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments of the inventions described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
Moreover, although the subject matter has been described in language specific to structural features and methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Number | Date | Country
---|---|---
62205660 | Aug 2015 | US