The present application relates to group communications. In particular, the application relates to simultaneous reproduction of an audio signal in a group communication.
Group-directed communications are commonplace in enterprise and public safety communication systems. With regard to voice communications, one end device directs an audio stream (i.e., a “talkburst”) to a given group (i.e. a “talkgroup”) of receiving end devices. These receiving end devices reproduce the audio stream through an amplified speaker. The manner in which the receiving end devices operate usually results in the reproduced sound being audible to people other than merely the intended recipient. Typically, in these group-based systems, the receiving end devices are located near each other, causing their associated listeners to hear the same audio stream reproduced by multiple end devices. This is particularly true in public safety uses, in which personnel often respond to incidences in a group and this group (or a subset thereof) is located in the same local area for an extended period of time.
In order to ensure the audio stream is intelligible to the intended listeners in such an environment, it is desirable for collocated devices to reproduce the audio stream in a time synchronized fashion. In other words, it is desirable for all speakers in the collocated devices to reproduce the same audio waveform at roughly the same time. In practice, a temporal offset of about 30 ms between multiple audible speakers reproducing the same waveform is virtually undetectable to most listeners.
Synchronization methods for the homogeneous circuit-based wireless radio area networks (RANs) of the current generation of enterprise and public safety communication systems are unlikely to provide acceptable results in future generations of RANs, which are likely to span multiple narrowband circuit-switched and broadband packet-based technologies. A variety of delays exist in such networks, causing spreading and jitter problems. Sources of these problems include different amounts of time for different destination end devices to be paged and activated, packet duplication and retries in broadband wireless networks, and multitasking processing delays. Without a mechanism to compensate for the combined new and existing sources of destination-specific delay and jitter, each end device will reproduce audio in an autonomous fashion. This results in unintelligibility when two or more end devices are collocated.
Embodiments will now be described by way of example with reference to the accompanying drawings.
Methods of compensating for non-uniform delays in group communications to coordinate audio reproduction are presented. Each end device in a collocated homogeneous or heterogeneous group has a processor, an antenna, a speaker, and multiple microphones. Audio emitted from the end devices may have delays that vary widely enough to cause interference substantial enough to impair intelligibility. Compensation algorithms are used to time align the presentation of such audio. The processor cross-correlates an audio stream received by the antenna with an audio stream received by one or more of the microphones of the end device and emitted from the speakers of the collocated end devices. The processor in each of the collocated end devices determines the most delayed of the audio streams produced by the collocated end devices and uses a time shifting algorithm to delay its own output to match the most delayed audio stream, thereby synchronizing audio reproduction across the collocated end devices. The reproduced audio from all of the end devices then has a relatively small phase offset from one device to another. Situations in which collocated media presentations may be used include one-to-many communication systems, two-way communication systems, and event sound reinforcement. Attenuation control of the speaker output may additionally or alternatively be provided.
As used herein, subscribers (also called end devices) are communication devices, such as mobile radios, that all receive the same audio stream from a transmitter. Each subscriber selects a particular channel through one or more user-actuated selectors for reproduction using one or more speakers. The subscriber may be personally portable or vehicle-mounted. The subscriber contains multiple microphones, including a microphone for the user to speak into and a noise cancelling microphone.
Speaker audio is the acoustic audio stream played out of a speaker of a receiver, or the digital audio presented to that speaker. This audio stream can be received by a subscriber from various networks such as a broadband network, a narrowband network, or a personal area network (PAN). The speaker audio is not received from the noise cancelling microphone. This audio stream is represented as xN(m) in the cross-correlation calculation below. A PAN can be based on Bluetooth or 802.11 and usually has a small coverage radius, e.g., up to about a hundred meters.
An audio source is an audio stream that has been received over the broadband network, narrowband network, or PAN, or digitally sampled from the noise cancelling microphone. This audio stream is represented as y(m) in the cross-correlation calculation below.
A transmitter is a subscriber or other communication device (such as a central controller) that transmits a media stream containing audio.
A receiver receives the audio stream either directly from the transmitter or through wired or wireless communication infrastructure (e.g., one or more intermediaries such as base stations) and reproduces the speaker audio.
Collocated subscribers are end devices that are disposed in a relatively small area (e.g., a radius of up to about 100 meters) such that audio reproduction from one of the subscribers is able to audibly interfere with audio reproduction from another of the subscribers significantly enough to negatively influence the experience of the user of the other subscriber. Proximity is the distance between receivers whose speaker audio may interfere. Proximity is detectable by a subscriber via a digital indication from infrastructure equipment or from other subscribers over a narrowband, broadband, 802.11, or Bluetooth radio link. It may also be indicated by energy exceeding a nominal noise threshold on the noise cancelling microphone.
Homogeneous end devices are end devices of the same general type (e.g., push-to-talk devices), but not necessarily the same model. Heterogeneous end devices are end devices of different types (e.g., cell phones vs. push-to-talk radios).
An incidence is an event, such as an accident, in proximity to which collocated subscribers are gathered.
The cross-correlation calculation is described by the equation:
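The equation itself appears as a figure in the original document. Given the definitions of xN(m) and y(m) in this section and the M-sample window discussed below, a plausible reconstruction is the standard discrete cross-correlation

c(t) = \sum_{m=0}^{M-1} x_N(m + t) \, y(m)

with interference indicated when one or more peaks of c(t) exceed a threshold.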
The terms xN(m) and y(m) are, respectively, the audio stream intended to be presented to the speaker and the audio stream not intended to be presented to the speaker (e.g., the audio stream received from the noise cancelling microphone). Interference is observed when the cross-correlation calculation is executed and one or more peaks exceeding a threshold are detected. Such peaks indicate that the audio streams being reproduced from the subscriber speakers interfere with each other when at least one subscriber's speaker audio is significantly delayed (e.g., by more than about 250 ms) relative to another subscriber's speaker audio.
In various embodiments described below, audio reproduction compensation algorithms are used if interferers are detected near a subscriber. Subscribers at an incidence scene whose audio delays vary widely (by more than about 250 ms) may be interferers. The use of a compensation algorithm enables subscriber users at the incidence scene to understand the reproduced audio stream when interferers are present. The compensation algorithm uses cross-correlation to determine the most delayed (lagged) audio stream and takes action accordingly. Compensation algorithms include both delay sensitive and delay insensitive algorithms; both types are also called time-shifting algorithms.
If an interferer is detected and the subscriber is configured to use only delay sensitive compensation algorithms, the audio presented to the leading speaker (the speaker reproducing the speaker audio earlier) may or may not be delayed depending on the amount of delay detected. If the delay of the interferer with respect to the subscriber is small (within a delay-sensitive compensation lag threshold of, e.g., 30, 50, or 100 ms, or anywhere therebetween), the audio presented to the speaker remains unaltered. If the delay of the interferer with respect to the subscriber is large (greater than the lag threshold), the audio presented to the speaker of the subscriber is delayed or attenuated/muted.
If an interferer is detected and the subscriber is configured to only use delay insensitive compensation algorithms, the audio presented to the leading speaker is compensated with a delay insensitive compensation algorithm. One such algorithm delays the audio to be presented to the speaker by the delay calculated in the cross-correlation calculation. Thus, if any phase offset is present, the speaker audio is delayed by the amount determined by the cross-correlation. Another algorithm delays the audio to be presented to the leading speaker by a fixed amount.
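As a rough illustration of these two policies, the following Python sketch delays or leaves unaltered a pending audio buffer based on a measured lag; all names and values (LAG_THRESHOLD_MS, apply_compensation, the 8 ksamples/sec rate) are assumptions for illustration, not taken from the source.

```python
# Hedged sketch of the delay sensitive vs. delay insensitive policies above.
LAG_THRESHOLD_MS = 50    # delay-sensitive lag threshold (30-100 ms range)
SAMPLE_RATE = 8000       # PCM samples/sec, per the example later in the text

def apply_compensation(pcm, lag_ms, delay_sensitive=True):
    """Return the PCM buffer to present to the leading speaker.

    pcm:    list of PCM samples about to be played
    lag_ms: measured delay of the lagging interferer, in milliseconds
    """
    lag_samples = int(lag_ms * SAMPLE_RATE / 1000)
    if delay_sensitive and lag_ms <= LAG_THRESHOLD_MS:
        return pcm                       # small offset: leave audio unaltered
    # Delay-insensitive, or offset above the threshold: delay by the measured
    # lag (attenuation/muting is the alternative action mentioned above).
    return [0] * lag_samples + pcm
```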
One embodiment of a one-to-many network is shown in FIG. 1.
In one embodiment of a one-to-many transmission, the transmitter 102 initiates a talkburst and sends the talkburst to a base station, which then transmits the talkburst to a controller. The controller forwards the talkburst to a base station and provides time stamping of the talkburst. Depending on the embodiment, the base station transmits the talkburst to the appropriate receivers 104 at the time indicated by the time stamp or when it receives the talkburst, independent of the time stamp. Real-time Transport Protocol/Real-time Transport Control Protocol (RTP/RTCP), the dominant protocol suite used to deliver streaming media over packet IP networks, is able to specify timestamps. This mechanism, however, only indicates the relative time at which a particular media sample was captured, not the absolute time at which it is to be reproduced. Moreover, the inclusion of an absolute timestamp in periodic RTCP messages only provides synchronization across multiple streams to a single endpoint (e.g., audio and video lip synchronization), not synchronization of the same stream to multiple endpoints. Additionally, the RTCP wall clock time is sent only periodically and may not be available at the time the initial packet is reproduced.
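To make the limitation concrete, the following Python sketch (illustrative only; semantics per RFC 3550) maps an RTP timestamp to wall-clock time using an RTCP sender report. The mapping yields the capture time of a sample at one endpoint, not the instant at which multiple endpoints should reproduce it.

```python
# RTP-to-wall-clock mapping from an RTCP sender report (RFC 3550). This
# supports lip sync within ONE endpoint; it does not tell two different
# endpoints when to play out the same stream.
CLOCK_RATE = 8000  # RTP clock rate for narrowband audio, samples/sec

def rtp_to_wallclock(rtp_ts, sr_ntp_seconds, sr_rtp_ts):
    """Map rtp_ts to wall-clock time using the latest sender report,
    which pairs an NTP timestamp with an RTP timestamp."""
    elapsed = (rtp_ts - sr_rtp_ts) / CLOCK_RATE   # seconds since the report
    return sr_ntp_seconds + elapsed               # capture time, not play-out time

# Example: a sample captured 800 RTP ticks (0.1 s) after the sender report.
print(rtp_to_wallclock(rtp_ts=48800, sr_ntp_seconds=1700000000.0, sr_rtp_ts=48000))
```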
One embodiment of the front of a PTT end device used in the network of FIG. 1 is shown in FIG. 2.
A method of time aligning group reproduction of an audio stream across homogeneous and heterogeneous end devices is shown in FIG. 3.
Independent of the technology used, multiple end devices are collocated in a pack that reproduces the same talkburst from a transmitting end device. Each collocated end device receives the same reproduced talkburst from the neighboring receiving end devices and aligns its reproduced talkburst to that of its neighbors. The end devices may be portable, such as that shown in FIG. 2.
As used herein, the lagging device in the pack is the end device whose reproduced talkburst is heard last by listening end devices. The leading device is the end device whose reproduced talkburst is heard first by listening end devices. If both narrowband circuit-based and broadband packet-based end devices are present in the pack, audio delay in the broadband devices tends to be longer than in the narrowband devices. In one embodiment, the end devices in the pack align their reproduction with that of the lagging device. In this case, the end devices slow down their reproduction during the alignment process. Although this may increase end-to-end delay (i.e., delay between audio being received by the transmitter and being reproduced by the receiving end devices), this technique imposes no requirements on packet delivery time to each end device.
As all end devices are configured to align with the lagging end device, a unidirectional shift in time occurs when aligning to the lagging device. This unidirectional time shift ensures that the end devices do not oscillate indefinitely in attempts to synchronize with one another.
Specifically, in the embodiment described, the sampled audio received from the additional microphone on the Nth end device when in listening mode is y(n). Assuming the end device receives compressed audio over the air (OTA) in packets (e.g., the end device is a broadband IP device), the end device reconstitutes linear pulse code modulation (PCM) samples from the received compressed audio, resulting in a sampled stream of PCM audio. This stream is denoted xi(n), i denoting the ith end device in the pack. Each of the devices (device1, device2, device3, etc.) has a reconstituted stream x1(n), x2(n), x3(n), etc. Thus, x1(n) is the demodulated OTA audio that was sent to device1 and is played out of device1's speaker, x2(n) is the demodulated OTA audio that was sent to device2 and is played out of device2's speaker, and so on. This leads to the sampled audio y(n) being:
y(n) = x_1(n) + x_2(n) + x_3(n) + \cdots + x_{N-1}(n) + n(n)
Where:
N − 1 = the number of devices in the pack within audible range of deviceN
n(n) = the sampled noise other than the audio from the transmitter being played out of each device. As n(n) is uncorrelated with x_i(n), this term produces no cross-correlation peak and is effectively ignored.
The centralized infrastructure (e.g., the controller) selects the same source to be transmitted to all end devices associated with a given talkgroup. As device1, device2, device3, . . . , deviceN are located within listening distance of each other, each end device receives the same audio from the base station at roughly the same bit error rate (BER). Each end device also applies roughly the same error mitigation to the received audio. Therefore, roughly the same audio is reproduced from multiple collocated device speakers, albeit slightly misaligned in time.
If x1(n) is the most time-lagging version of the audio (which is herein used as the reference), this gives:
x_2(n) = x_1(n - t_2)
x_3(n) = x_1(n - t_3)
. . .
x_{N-1}(n) = x_1(n - t_{N-1})
x_N(n) = x_1(n - t_N)
Where:

t_i = the time offset (in samples) of x_i(n) relative to the reference stream x_1(n)

This yields:
y(n) = x_1(n) + x_1(n - t_2) + x_1(n - t_3) + \cdots + x_1(n - t_{N-1}) + n(n)
Continuing, deviceN takes the cross-correlation of the reconstituted audio deviceN received OTA (i.e., xN(n)) with the audio sampled at deviceN's microphone (i.e., y(n)). The cross-correlation of deviceN is given by:
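This equation is likewise rendered as a figure in the source; consistent with the form given earlier, it plausibly reads

c_N(t) = \sum_{m=0}^{M-1} x_N(m + t) \, y(m)

so that, substituting the expression for y(n) above, peaks appear at lags t_N, t_N − t_2, t_N − t_3, and so on, with the uncorrelated noise n(n) contributing no peak.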
That is, the lag or advancement of the audio played out of device1's speaker relative to the audio played out of deviceN's speaker is shown by a peak at cN(tN), the lag or advancement of the audio played out of device2's speaker relative to the audio played out of deviceN's speaker is shown by a peak at cN(tN−t2), and so on. If no peaks are present in the cross-correlation (other than at cN(0)), then x2(n), x3(n), . . . , xN(n) are sufficiently attenuated streams of audio. If x2(n), x3(n), . . . , xN(n) are sufficiently attenuated, the audio from these devices may not interfere with the ability of the listener at device1 to discern the x1(n) audio stream.
In addition to the audio stream being used for time alignment, noise (e.g. n(n)) common at both the source microphone of the transmitter and at the microphone where y(n) is sampled can serve to provide the common element for audio alignment.
To gauge the threshold used to distinguish the c(n) peaks, the noise floor f(0) can be determined by:
In the maximum term of the cross-correlation summation, the number of samples M is chosen to include the maximum possible differential delay. For example, if the differential audio processing delay incurred by varying unicast packet delay arrivals is 240 ms, the differential processing delay is 10 ms, and the PCM audio sample rate is 8 ksamples/sec, then M is at least (0.240 s + 0.010 s) × 8000 samples/s = 2000 samples.
In the embodiment described, the lagging end device in the pack is the master to which all other radios delay and align their audio. In this case, after all of the c(n) values are accrued and the peaks are determined, the peak with the largest delay is chosen, i.e., the peak whose t value is the largest. Upon determination that tN is the largest, deviceN then delays its audio by tN samples to be aligned with device1. Similarly, the ci(n) peaks for each end device cause the devices to shift their audio toward the most lagging device. This causes a strong cross-correlation peak as the audio reproductions from the various end devices shift to the audio reproduction with the greatest lag.
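A minimal numpy sketch of this step follows; the function names, the threshold handling, and the restriction to positive lags are assumptions for illustration rather than the patent's implementation.

```python
# Hedged sketch: cross-correlate the device's own OTA stream with the
# microphone capture, find the most lagging peak, and delay play-out.
import numpy as np

M = 2000  # window covering 250 ms of differential delay at 8 ksamples/sec

def largest_lag(x_n, y, threshold):
    """Return the largest lag t (in samples) whose cross-correlation peak
    exceeds threshold, or 0 if no such peak exists."""
    c = np.correlate(x_n[:M], y[:M], mode="full")  # c[t] = sum x_N(m+t) y(m)
    lags = np.arange(-(M - 1), M)                  # lag for each index of c
    peak_lags = lags[c > threshold]
    peak_lags = peak_lags[peak_lags > 0]           # only devices this one leads
    return int(peak_lags.max()) if peak_lags.size else 0

def delay_playout(x_n, t):
    """Delay this device's play-out by t samples (prepend silence)."""
    return np.concatenate([np.zeros(t, dtype=x_n.dtype), x_n])
```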
End devices can align their output waveforms one or more times per talkburst (i.e., at multiple times during a particular talkburst). Alternatively, the end devices can align their output waveforms once every predetermined number of talkbursts (e.g., every 2, 3, 4, etc. talkbursts). Once the waveforms of the end devices are aligned with those of their neighbors, relative timestamps embedded in the stream (such as those provided by RTP or the circuit nature of LMR) generally continue to keep the waveforms in alignment. Minor clock variances of a few tens of milliseconds are not noticeable, as the human brain generally ignores up to 30 ms of time offset (nominally, delays of greater than about 50-100 ms are noticeable). End devices may attempt to maintain audio quality during the alignment process by employing known time compression and/or expansion techniques to fill/remove audio as desired over this relatively small interval while maintaining the integrity of the overall voice message.
When the end devices align their output waveforms, the alignment may be set to occur after a particular time period. In such an embodiment, an internal counter in the end device increments or decrements by a predetermined amount and then initiates cross-correlation at the next free time. Thus, for example, if the end device is receiving a talkburst or is otherwise occupied (e.g., performing maintenance), cross-correlation is not initiated until after the end of the talkburst or period of being occupied. Such an embodiment also permits the temporal alignment to be maintained without additional processing if a call ends and the hang time (the time between the end of a talkburst and the beginning of the next talkburst of any users on the system) has not been exceeded.
During cross-correlation, assuming the end devices align to the lagging end device, the audio may slow down for a short amount of time until aligned with the lagging end device. This slowdown may provide a gradual transition to the lagging end device over the time period during which the alignment occurs (hereinafter referred to as the alignment time) so as to produce no noticeable distortion of the audio. Alternatively, the audio of the end device being aligned may be suspended for the time difference between that end device and the lagging end device. The alignment time is thus dependent on the time difference between the lagging end device and the end device being synchronized to it, as well as the length of time taken to achieve the time difference (which depends on the amount of distortion acceptable).
In some embodiments, the audio is slowed down or suspended over a continuous period. In other embodiments, the slowdown or suspension may occur over a number of shorter intervals between successive talkbursts. This latter implementation extends the alignment time but can reduce noticeability to a user.
During the alignment time, the initial portion of the talkburst may be muddled by the unaligned pack audio. When alignment occurs, the initial portion of the talkburst used in the cross-correlation may be ignored by internal correction mechanisms—that is, the audio from misaligned end devices starts off muddled and transitions to aligned audio without changing the talkburst. In other embodiments, when alignment occurs, the talkburst may be restarted such that the initial portion of the talkburst is repeated and the talkburst continues after this repetition.
In another embodiment, if analysis of the cross-correlation determines that peaks above the threshold are present, an internal volume reduction mechanism in each end device that is not the lagging end device automatically mutes its volume or otherwise reduces it to a level below that which causes the associated peak to exceed the threshold. The volume may increase gradually from the reduced level in proportion to the decreasing time shift from the lagging end device, or may return to the initial volume setting on each of the end devices once alignment is completed.
In some cases, the end devices may contain a locator such as a Global Positioning System (GPS) unit embedded therein. Locators may be relatively expensive and bulky, and are dependent on maintaining constant visibility to a satellite constellation. While these problems may make it impractical to equip all end devices with locators, locators are nevertheless being incorporated into end devices to an ever greater extent.
For each end device in the pack that contains such a locator, the locator may be used in conjunction with the cross-correlation to provide time alignment and/or volume control. Such an embodiment may be useful, for example, if the microphone(s) of a particular end device that capture the cross-correlation audio become muffled. In this case, although the loudspeakers of other end devices in the pack may be broadcasting loudly enough to normally cause the peaks to be above threshold (and thus the reproduced audio from these end devices to be audible to other users), the peaks may appear to be below threshold. This leads to the end device with the muffled microphone remaining unaligned and consequently being a distraction.
Such a problem may be alleviated if a locator is used. For example, the use of the locator permits the threshold to be adjusted for end devices that are within a predetermined radius of other end devices in the pack. The volume of the other end devices may also be reduced so long as they are within the particular radius. Further, a ripple-type effect during time alignment may occur with increasing distance from the lagging end device if not all of the end devices in the pack produce peaks that are above threshold. The use of a locator may avoid such a problem, permitting simultaneous time alignment for all of the end devices in the pack.
In certain circumstances, it may be desirable to use multiple cross-correlations. For example, while the frequency response characteristics of I/O devices (e.g., microphones, loudspeakers) do not tend to vary greatly among individual end devices of the same family (e.g., PTT end devices supplied by a particular company or manufacturer), these characteristics may vary significantly more between different families of end devices, especially as different I/O devices are used. As the thresholds may thus differ, it may be desirable in one embodiment to run different cross-correlations for a selected number of device families, depending on the different frequency response characteristics of their I/O devices.
As above, each end device may contain one or more internal receivers and one or more microphones. At least one of the microphones is used for noise cancellation. In one embodiment, the internal receivers and the cancellation microphone are sources of audio streams (hereinafter referred to as audio channels). Only one audio channel (other than that from the cancellation microphone) may be the primary audio channel, which sources the primary audio stream. The primary audio stream contains the audio presented to the subscriber user and is used as the reference for the correlation algorithm. The primary audio may also be attenuated (the attenuationFactor, which can be between 0 and 1 inclusive, is multiplied by each sample in the primaryAudioStream). The goal is to determine whether an audio stream is present on an audio channel and on the primary audio channel. If a stream is present on an audio channel and that stream is considered proximally close enough to affect the audio quality of the primary audio, a compensation algorithm (correlation or attenuation) is run on the primary audio stream.
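A one-function Python sketch of the attenuation branch, using the attenuationFactor and primaryAudioStream names from the text (the function itself is an assumed illustration):

```python
def attenuate(primary_audio_stream, attenuation_factor):
    """Multiply each sample of the primary audio stream by a factor between
    0 (mute) and 1 (unchanged) inclusive, as described above."""
    if not 0.0 <= attenuation_factor <= 1.0:
        raise ValueError("attenuationFactor must be between 0 and 1 inclusive")
    return [attenuation_factor * sample for sample in primary_audio_stream]
```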
The algorithm 400 of FIG. 4 examines each audio channel in the audioChannelList array in turn.
If an audio stream has been detected on the audio channel, audioChannelList(i).detected is set to TRUE at step 412. If an audio stream has been detected at step 412, it is determined at step 414 whether the distance to the audio stream source is greater than MAX_AUDIO_CHANNEL_DISTANCE. If the distance is not greater than MAX_AUDIO_CHANNEL_DISTANCE, audioChannelList(i).compensation is set to TRUE at step 416; if it is greater than MAX_AUDIO_CHANNEL_DISTANCE, audioChannelList(i).compensation is set to FALSE at step 418. After audioChannelList(i).compensation is set at either step 416 or step 418, “i” is incremented at step 410.
After incrementing “i” at step 410, at step 420 the algorithm 400 determines whether any other audio stream sources (audio channels) are present in the array. Thus, at step 420, the current value of “i” (after being incremented at step 410) is compared with the value of NUM_AUDIO_CHANNEL_LIST_SIZE. If the current value of “i” is less than NUM_AUDIO_CHANNEL_LIST_SIZE at step 420 (i.e., more audio channels are present), the algorithm 400 returns to step 406 for the new audio channel. If the current value of “i” is not less than NUM_AUDIO_CHANNEL_LIST_SIZE at step 420 (i.e., no more audio channels are present), it is determined at step 422 whether a primary audio stream is being presented to the speaker for audio reproduction. If at step 422 it is determined that a primary audio stream is not being presented to the speaker, the algorithm 400 returns to step 404. If at step 422 it is determined that a primary audio stream is being presented to the speaker, at step 424 the algorithm 400 runs the compensation algorithm 500 of FIG. 5.
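The detection loop of algorithm 400 might be rendered in Python as follows; the AudioChannel fields and the distance value are assumptions inferred from the flowchart description, not the source's code.

```python
from dataclasses import dataclass

MAX_AUDIO_CHANNEL_DISTANCE = 100.0  # meters; assumed value

@dataclass
class AudioChannel:
    """Hypothetical stand-in for one audioChannelList entry."""
    stream_present: bool = False   # whether a stream was heard on the channel
    distance_m: float = 0.0        # distance to the stream source
    detected: bool = False
    compensation: bool = False

def detect_channels(audio_channel_list):
    """Detection loop of algorithm 400 (steps 406-420), as inferred."""
    for channel in audio_channel_list:
        if channel.stream_present:
            channel.detected = True                        # step 412
            # step 414: compensate only for proximally close sources
            channel.compensation = (
                channel.distance_m <= MAX_AUDIO_CHANNEL_DISTANCE
            )                                              # steps 416/418
```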
The compensation algorithm 500 of FIG. 5 loops through the audioChannelList and applies the configured compensation to the primary audio stream.

Turning to FIG. 5, at step 504 it is determined whether the audio channel is to be compensated (i.e., whether audioChannelList(i).compensation is TRUE). If so, at step 506 it is determined whether attenuation is to be applied by determining if audioChannelList(i).compensationType=ATTENUATION; if attenuation is to be applied, the primary audio stream is attenuated.
If it is determined that attenuation is not to be applied at step 506, it is determined whether correlation is to be applied by determining if audioChannelList(i).compensationType=CORRELATION at step 508. If it is determined that correlation is not to be applied at step 508, the compensation algorithm 500 proceeds to step 512, where the value of “i” is incremented. If it is determined that correlation is to be applied at step 508, the compensation algorithm 500 sets runCorrelationCompensationAlgorithmFlag to TRUE at step 510 and then continues to step 512, where the value of “i” is incremented.
After incrementing “i” at step 512, at step 520 the compensation algorithm 500 determines whether any other audio stream sources (audio channels) are present in the array. Thus, at step 520, the current value of “i” (after being incremented at step 512) is compared with the value of NUM_AUDIO_CHANNEL_LIST_SIZE. If the current value of “i” is less than NUM_AUDIO_CHANNEL_LIST_SIZE at step 520 (i.e., more audio channels are present), the compensation algorithm 500 returns to step 504 to determine whether the new audio channel is to be compensated. If the current value of “i” is not less than NUM_AUDIO_CHANNEL_LIST_SIZE at step 520 (i.e., no more audio channels are present), it is determined at step 522 whether correlation is to be applied (i.e., whether runCorrelationCompensationAlgorithmFlag is TRUE) for any audio channel. In other words, the loop goes through every element in the audioChannelList looking for at least one for which the correlation compensation algorithm is to be run; once set to TRUE in the loop, the flag remains TRUE. Step 522 thus checks whether the flag was set to TRUE at least once. If it is determined at step 522 that correlation is not to be applied (i.e., the flag is FALSE), the compensation algorithm 500 terminates. If it is determined at step 522 that correlation is to be applied (i.e., the flag is TRUE), the correlation compensation algorithm (delay sensitive or delay insensitive, as programmed) is executed at step 524 before the compensation algorithm 500 terminates.
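Compensation algorithm 500 might similarly be rendered as the following Python sketch, operating on channel objects like those in the previous sketch (with an assumed compensation_type field); the ATTENUATION/CORRELATION values and the step-524 stub are likewise assumed.

```python
ATTENUATION = "ATTENUATION"
CORRELATION = "CORRELATION"

def run_correlation_compensation(stream):
    """Stand-in for step 524 (delay sensitive or delay insensitive)."""
    return stream

def compensate(audio_channel_list, primary_audio_stream, attenuation_factor):
    run_flag = False  # runCorrelationCompensationAlgorithmFlag
    for channel in audio_channel_list:                   # steps 504-520
        if not channel.compensation:                     # step 504
            continue
        if channel.compensation_type == ATTENUATION:     # step 506
            primary_audio_stream = [
                attenuation_factor * s for s in primary_audio_stream
            ]
        elif channel.compensation_type == CORRELATION:   # step 508
            run_flag = True                              # step 510; stays TRUE
    if run_flag:                                         # step 522
        primary_audio_stream = run_correlation_compensation(primary_audio_stream)
    return primary_audio_stream
```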
Another flowchart of a method of compensating for temporally misaligned audio in an end device is shown in FIG. 6.
Activation and deactivation of the correlation features described herein may further be provided. Selection may be provided by an input on the end device and thus set by the user of the particular end device. Alternatively, the selection may be set externally, e.g., by the user that initiated the talkgroup, the leader of the talkgroup, a talkgroup configuration, or a default server setting. Selection may thus be effective on a call-to-call basis or for an extended period of time. In the event that multiple conflicting selections exist, selection priorities may be pre-established and stored in the server or end device to determine which selection is to be used.
Although OTA streams have been described herein, similar techniques may be used for signals provided via other short range communication paths. For example, a PAN using short range communications such as WiFi or Bluetooth connections may be used for time alignment instead of OTA audio. End devices employing this connectivity may provide a beacon or announcement for time alignment prior to an actual audio stream being reproduced by the end devices in the pack.
Although audio transmissions have been described herein, similar techniques may be used for other media presentations. The media transmissions may contain audio, in which case the OTA method described above may be used. Alternatively, if beacons/announcements are used, the media transmissions may be provided without audio. The choice of algorithm may depend on the system. For example, as time shifting adds audio throughput delay, it may be more useful for delay insensitive systems. Attenuation, on the other hand, may be better suited to systems that are sensitive to audio throughput delay.
It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Those skilled in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above described embodiments without departing from the spirit and scope of the invention defined by the claims, and that such modifications, alterations, and combinations are to be viewed as being within the scope of the inventive concept. Thus, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all of the claims. The invention is defined solely by any claims issuing from this application and all equivalents of those issued claims.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.