There is considerable interest in identifying and/or measuring the receipt of, and/or exposure to, audio data by an audience in order to provide market information to advertisers, media distributors, and the like, to verify airing, to calculate royalties, to detect piracy, and for any other purpose for which an estimation of audience receipt or exposure is desired. Additionally, there is considerable interest in providing content and/or performing actions on devices based on media exposure detection. The emergence of multiple, overlapping media distribution pathways, as well as the wide variety of available user systems (e.g., PCs, PDAs, portable CD players, Internet appliances, TV, radio, etc.) for receiving audio data and other types of data, has greatly complicated the task of measuring audience receipt of, and exposure to, individual program segments. The development of commercially viable techniques for encoding audio data with program identification data provides a crucial tool for measuring audio data receipt and exposure across multiple media distribution pathways and user systems.
Recently, advances have been made in creating universal media codes, commonly known as “trackable asset cross-platform identification” (or TAXI), in order to track media assets such as videos, music, advertisements, etc. across multiple platforms. Currently, the Coalition for Innovative Media Measurement (CIMM) is developing TAXI to establish open and interoperable standards upon which incumbent business applications and supporting operational processes can more effectively adapt to the requirements of asset tracking. By utilizing standardized (universal) cross-platform asset identification techniques, systems may simplify a variety of business, technical and operational challenges. Briefly, TAXI is configured to identify entertainment and advertising assets across distribution platforms and to establish standards for multi-channel asset tracking. It acts as a UPC code for all audio/video programming and advertising assets, and is based on the Entertainment Identifier Registry (EIDR) and/or Ad-ID formats, among others. It operates to establish cross-sector protocols for video asset registration, ID flow-through, and transaction measurement and reporting, and may serve as a foundation layer for many critical content and advertising applications.
One of the issues with standardized cross-platform asset identification technologies is that non-audio identification formats are not easily transposed into audio formats. For example, IDs for media content may contain a code that is non-acoustically encoded as metadata into the content before transmission, broadcast, multicast, etc. One exemplary code, used under the Entertainment Identifier Registry (EIDR) format (http://eidr.org/), utilizes an EMA metadata structure to provide data fields for communicating descriptive, logical, and technical metadata regarding media from content providers. In certain cases, metadata includes elements that cover typical definitions of media, particularly movies and television, and may have two parts, namely, basic metadata and digital asset metadata. Basic metadata includes descriptions such as title and artists; it describes information about the work independent of encoding. Digital asset metadata describes information about individual encoded audio, video and subtitle streams, and other included media. Package and file metadata describes a single possible packaging scenario and ties in other metadata types, such as ratings and parental control information. Other types of metadata, such as “common metadata,” are designed to provide definitions to be inserted into other metadata systems, such as EIDR metadata and UltraViolet metadata. Downstream users may then define additional metadata to cover areas not included in common metadata.
While such metadata is readily detectable via data connection, it may not be detectable, or may not even exist, in the audio itself. Accordingly, there is a need to provide universal identification codes in audio. Furthermore, as universal identification codes are generally capable of carrying more information than standard audio codes, it would be advantageous to have an encoding system capable of carrying such codes in audio. Such configurations would allow the transposition of non-audio universal codes into audio formats and provide more robust information for audience measurement purposes.
For this application, the following terms and definitions shall apply, both for the singular and plural forms of nouns and for all verb tenses:
The term “data” as used herein means any indicia, signals, marks, domains, symbols, symbol sets, representations, and any other physical form or forms representing information, whether permanent or temporary, whether visible, audible, acoustic, electric, magnetic, electromagnetic, or otherwise manifested. The term “data” as used to represent predetermined information in one physical form shall be deemed to encompass any and all representations of the same predetermined information in a different physical form or forms.
The term “audio data” as used herein means any data representing acoustic energy, including, but not limited to, audible sounds, regardless of the presence of any other data, or lack thereof, which accompanies, is appended to, is superimposed on, or is otherwise transmitted or able to be transmitted with the audio data.
The term “network” as used herein means networks of all kinds, including both intra-networks, such as a single-office network of computers, and inter-networks, such as the Internet, and is not limited to any particular such network.
The term “processor” as used herein means data processing devices, apparatus, programs, circuits, systems, and subsystems, whether implemented in hardware, tangibly-embodied software, or both.
The terms “communicate” and “communicating” as used herein include both conveying data from a source to a destination, as well as delivering data to a communications medium, system or link to be conveyed to a destination. The term “communication” as used herein means the act of communicating or the data communicated, as appropriate.
The terms “coupled”, “coupled to”, and “coupled with” shall each mean a relationship between or among two or more devices, apparatus, files, programs, media, components, networks, systems, subsystems, and/or means, constituting any one or more of (a) a connection, whether direct or through one or more other devices, apparatus, files, programs, media, components, networks, systems, subsystems, or means, (b) a communications relationship, whether direct or through one or more other devices, apparatus, files, programs, media, components, networks, systems, subsystems, or means, or (c) a functional relationship in which the operation of any one or more of the relevant devices, apparatus, files, programs, media, components, networks, systems, subsystems, or means depends, in whole or in part, on the operation of any one or more others thereof.
In one or more exemplary embodiments, a method of encoding audio data is disclosed, comprising the steps of receiving a persistent identifier code comprising data for uniquely identifying a media object; generating audio code components comprising frequency characteristics to represent symbols of the persistent identifier code; and psychoacoustically embedding the audio code components into an audio portion of the media object to include the persistent identifier code within one or more of a plurality of encoding layers. The persistent identifier code may comprise a registry prefix and a registry suffix, wherein the registry suffix comprises data uniquely identifying the media object from a plurality of other media objects. The persistent identifier code may be received over a computer network or from a registry database. Alternatively, the received persistent identifier code may be detected from a non-audio data portion of the media object.
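By way of a non-authoritative illustration, the sketch below (Python) shows one way the encoding steps above could be realized: each symbol of a persistent identifier is mapped to a set of sinusoidal code components, which are then summed into the audio. The symbol alphabet, frequency map and fixed amplitude are hypothetical; a real encoder would shape the component amplitudes against a psychoacoustic masking model, as described later in this disclosure.

```python
import numpy as np

# Hypothetical mapping: each symbol of a 12-symbol alphabet is represented
# by four unique frequency components (Hz); values are illustrative only.
SYMBOL_FREQS = {s: [1000 + 40 * s + 480 * k for k in range(4)] for s in range(12)}

def synthesize_symbol(symbol, duration=0.5, rate=48000, amplitude=0.01):
    """Sum the sinusoidal code components for one symbol period."""
    t = np.arange(int(duration * rate)) / rate
    return sum(amplitude * np.sin(2 * np.pi * f * t) for f in SYMBOL_FREQS[symbol])

def embed_code(audio, symbols, rate=48000, duration=0.5):
    """Add one code symbol per half-second interval of the (mono, float) audio."""
    out = np.array(audio, dtype=float)
    step = int(duration * rate)
    for i, sym in enumerate(symbols):
        seg = synthesize_symbol(sym, duration, rate)
        start = i * step
        end = min(start + step, len(out))
        out[start:end] += seg[:end - start]
    return out

# e.g., embed a (hypothetical) prefix/suffix symbol sequence into 3 s of silence
encoded = embed_code(np.zeros(3 * 48000), [2, 7, 1, 11, 4, 0])
```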
In one or more other exemplary embodiments, methods for decoding audio data are disclosed comprising the steps of receiving audio data associated with a media object in a device, wherein the audio data comprises a psychoacoustically embedded persistent identifier code comprising data for uniquely identifying the media object; transforming the audio data into a frequency domain; and processing the transformed audio data to detect the persistent identifier code, wherein the persistent identifier code comprises audio code components having frequency characteristics representing symbols of the persistent identifier code, and wherein the persistent identifier code is detected from one or more of a plurality of encoded layers within the audio data. Again, the persistent identifier code comprises a registry prefix and a registry suffix, wherein the registry suffix comprises data uniquely identifying the media object from a plurality of other media objects. In some embodiments, metadata may be retrieved from a registry database in response to detecting the persistent identifier code, where the metadata comprises information relating to media object types, media object relationships, and descriptive and encoding metadata for the media object.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.
Various embodiments of the present invention will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail since they would obscure the invention in unnecessary detail.
Turning to
In the exemplary embodiment of
In the embodiment of
Turning to
Handle System 212 in
Core registry 202 provides a module for customization and configuration of the media asset/object ID repository. Under one embodiment, core registry 202 performs various functions including registration, generation of unique identifiers, indexing, object storage management, and access control. As explained above, each media object is assigned a unique ID upon registration. Media object repository 203 may store and provide access to registered objects, which preferably include collections of metadata, and not necessarily the media assets themselves. The metadata includes standard object information, relationships, and access control settings. De-duplication module 204 is a module that may be called by core registry 202 to check the uniqueness of a newly created or modified object. The de-duplication module provides information in response to a registry request indicating whether a record is a duplicate, a potential duplicate, or unique. Core registry 202, media object repository 203 and de-duplication module 204 are preferably embodied in one or more servers 201 or other devices capable of being accessed over a computer network. Core registry 202 preferably contains one or more APIs to allow access to the system.
For example, an administrative API may be provided to allow calls to the system to manage accounts, users and access control lists via an administration console 208. A public API may also be provided for API calls to provide a user interface and allow applications (207) to make requests from the registry. Under one embodiment, the API may be configured under a representational state transfer (REST) architecture to provide scalability of component interaction and independent deployment. Service calls may include individual or batched calls. Bulk media asset registrations may be done via bulk registration 206, which is preferably configured to submit many (e.g., up to 100,000) registration requests at one time. The system of
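As a rough sketch only, a batched registration call against such a REST-style public API might look like the following (Python); the endpoint URL, payload shape and bearer-token authorization are assumptions for illustration, not the registry's actual interface.

```python
import json
import urllib.request

REGISTRY_URL = "https://registry.example.com/v1/register/batch"  # hypothetical endpoint

def register_batch(media_objects, api_token):
    """Submit a batch of registration requests in a single REST call."""
    payload = json.dumps({"requests": media_objects}).encode("utf-8")
    req = urllib.request.Request(
        REGISTRY_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + api_token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # assumed response shape: a per-item status such as
        # "unique", "duplicate" or "potential duplicate"
        return json.load(resp)
```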
Under one embodiment, the system of
Regarding media object relationships, multiple media objects may be connected to each other through metadata, where the relationships may be classified according to inheritance or dependence. For inheritance, a media object on which the relationship exists can inherit basic metadata fields from the object to which the relationship refers. Preferably, objects in the registry are related to each other as nodes in a tree, where items in a tree can inherit certain fields from their parent. For example, all of the seasons and episodes of a series may be related in a tree rooted in the series object. Additional non-parental relationships, such as one object being included in a composite with items from outside its own hierarchy, are also possible. An inheritance relationship may exist on an object for such characteristics as isSeasonOf, isEpisodeOf, isEditOf, isSongOf, isLanguageVariantOf, isEncodingOf, isClipOf, etc. Regarding dependence, a media object may depend on another object by including a reference to it. For example, when encoding A refers to encoding B by reference, A is dependent on B, and when composite C includes clip D, C is dependent on D.
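The tree-with-inheritance behavior can be pictured with a minimal sketch (Python); the class, field names and identifiers below are hypothetical, but the lookup rule, namely use a locally set basic-metadata field and otherwise walk up the parent chain, matches the inheritance described above.

```python
class MediaObject:
    """Registry node; unset basic-metadata fields are inherited from the parent."""
    def __init__(self, object_id, parent=None, relationship=None, **fields):
        self.object_id = object_id
        self.parent = parent              # e.g. via "isSeasonOf" / "isEpisodeOf"
        self.relationship = relationship
        self.fields = fields              # basic metadata set on this object

    def get(self, name):
        node = self
        while node is not None:           # walk up the tree until the field is found
            if name in node.fields:
                return node.fields[name]
            node = node.parent
        return None

series = MediaObject("id-series", title="Example Series", originalLanguage="en")
season = MediaObject("id-s01", parent=series, relationship="isSeasonOf")
episode = MediaObject("id-s01e03", parent=season, relationship="isEpisodeOf",
                      title="Episode 3")

assert episode.get("title") == "Episode 3"       # set locally
assert episode.get("originalLanguage") == "en"   # inherited from the series root
```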
Turning to
In one embodiment, binding may be performed in 303 prior to storage 304 on a media object that is configured to have additional data embedded into it (e.g., subtitles, text, graphics), and/or is configured to be transmitted across a computer network, such as the Internet. Generally speaking, data binding associates a value from a source object with a property on a destination object. The source property can be any suitable data for binding, and the destination can be a dependency property. When using binding frameworks such as Windows Presentation Foundation (WPF), the binding may be configured such that the WPF source object provides change notification. Thus, once binding has occurred with a WPF element property as a source, when the source changes, the destination property will automatically be updated. Elements can be bound to data from a variety of data sources in the form of common language runtime (CLR) objects and XML. Content control classes such as Button (a windows control button) and items control classes (controls that can be used to present a collection of items) such as ListBox (a list of selectable items) and ListView (a control that displays a list of data items) may be configured with built-in functionality to enable flexible styling of single data items or collections of data items. Sort, filter, and group views can also be generated on top of the data.
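WPF itself is a .NET framework, but the change-notification contract it relies on can be sketched generically; the following Python is a minimal stand-in, assuming a simple observer list, to show why a destination property updates automatically once bound to a notifying source.

```python
class Observable:
    """Minimal change-notification source (the role a WPF dependency
    property plays in the binding described above)."""
    def __init__(self, value=None):
        self._value = value
        self._subscribers = []

    def bind(self, callback):
        self._subscribers.append(callback)
        callback(self._value)            # push the current value on binding

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        for cb in self._subscribers:
            cb(new_value)                # destinations update automatically

title = Observable("Untitled")
dest = {}
title.bind(lambda v: dest.__setitem__("display_title", v))
title.value = "Example Movie"
assert dest["display_title"] == "Example Movie"
```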
Registered media assets in 304 may subsequently be transmitted and/or broadcast to users over different mediums including streaming media, broadcast and content delivery network (CDN), where media asset IDs are collected using suitable ID extraction software tangibly embodied on a processor-based hardware device. In the case of CDNs, media asset IDs are detected in 308 from the server side and/or the user side via beacons, cookies, tags or the like. Media asset IDs from broadcast may be detected in 307 from the user side via set-top box, intelligent TV and the like via return path, return channel or back channel data. For streaming media, media asset IDs may be detected in 306 (from a server and/or user side) via log panel data or the like.
Turning to
Media asset ID codes may be added to the audio data using encoding techniques suitable for encoding audio signals that are reproduced as acoustic energy, such as, for example, the techniques disclosed in U.S. Pat. No. 5,764,763 to Jensen, et al., and modifications thereto, which is assigned to the assignee of the present invention and which is incorporated herein by reference. Other appropriate encoding techniques are disclosed in U.S. Pat. No. 5,579,124 to Aijala, et al., U.S. Pat. Nos. 5,574,962, 5,581,800 and 5,787,334 to Fardeau, et al., U.S. Pat. No. 5,450,490 to Jensen, et al., and U.S. patent application Ser. No. 09/318,045, in the names of Neuhauser, et al., each of which is assigned to the assignee of the present application and all of which are incorporated herein by reference.
In accordance with certain advantageous embodiments of the invention, this media asset ID code is encoded continuously throughout a time base of a media asset segment. In accordance with certain other advantageous embodiments of the invention, this media asset ID code occurs repeatedly, either at a predetermined interval or at a variable interval or intervals. These types of encoded signals have certain advantages that may be desired, such as, for example, increasing the likelihood that a program segment will be identified when an audience member is only exposed to part of the media asset segment, or, further, determining the amount of time the audience member is actually exposed to the segment. In another advantageous embodiment, media asset ID codes may be broken into multiple segments, where the first of these codes may be a media asset ID code prefix, followed by a media asset ID code suffix, where they may be encoded continuously or repeatedly in a predetermined order or other suitable arrangement. In another advantageous embodiment, the audio data of the media asset may include two (or more) different media asset ID codes. This type of encoded data has certain advantages that may be desired, such as, for example, using the codes to identify two different program types in the same signal, such as a television commercial (e.g., using Ad-ID code format) that is being broadcast along with a movie on a television (e.g., using an EIDR code format), where it is desired to monitor exposure to both the movie and the commercial.
After devices 306, 307 and/or 308 receive the audio data, in certain embodiments, they facilitate reproduction of the audio data as acoustic audio data, and preferably contain decoding hardware and/or software capable of decoding the media asset ID code(s), described in greater detail below in
With regard to encoding universal media ID codes into audio,
When utilizing a multi-layered message for universal media ID codes, one, two, three or more layers may be present in an encoded data stream, and each layer may be used to convey different data. Turning to
Second layer 502 of message 500 is illustrated having a similar configuration to layer 501, where each symbol set includes two synchronization symbols 509, 511, a larger number of data symbols 510, 512, and time code symbols 513. The third layer 503 includes two synchronization symbols 514, 516, and a larger number of data symbols 515, 517. The data symbols in each symbol set for the layers (501-503) should preferably have a predefined order and be indexed (e.g., 1, 2, 3). The code components of each symbol in any of the symbol sets should preferably have selected frequencies that are different from the code components of every other symbol in the same symbol set. Under one embodiment, none of the code component frequencies used in representing the symbols of a message in one layer (e.g., layer 1 501) is used to represent any symbol of another layer (e.g., layer 2 502). In another embodiment, some of the code component frequencies used in representing symbols of messages in one layer (e.g., layer 3 503) may be used in representing symbols of messages in another layer (e.g., layer 1 501). However, in this embodiment, it is preferable that “shared” layers have differing formats (e.g., layer 3 503, layer 1 501) in order to assist the decoder in separately decoding the data contained therein.
Sequences of data symbols within a given layer are preferably configured so that each sequence is paired with another and separated from it by a predetermined offset. Thus, as an example, if data 505 contains code 1, 2, 3 having an offset of “2”, data 507 in layer 501 would be 3, 4, 5. Since the same information is represented by two different data symbols that are separated in time and have different frequency components (frequency content), the message may be diverse in both time and frequency. Such a configuration is particularly advantageous where interference would otherwise render data symbols undetectable. Under one embodiment, each of the symbols in a layer has a duration (e.g., 0.2-0.8 sec) that matches other layers (e.g., layer 1 501, layer 2 502). In another embodiment, the symbol duration may be different (e.g., layer 2 502, layer 3 503). During a decoding process, the decoder detects the layers and reports any predetermined segment that contains a code.
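A small sketch of the offset pairing follows (Python); the modular wrap-around at the top of a 12-symbol alphabet is an assumption, but the 1, 2, 3 into 3, 4, 5 mapping with an offset of 2 matches the example above, as does the decoder-side check that every pair exhibits the same offset.

```python
OFFSET = 2      # predetermined offset between paired sequences
ALPHABET = 12   # symbols indexed 1..12 within a layer (assumed)

def paired_sequence(data_symbols, offset=OFFSET):
    """Derive the second data sequence (e.g. 507) from the first (e.g. 505)."""
    return [((s - 1 + offset) % ALPHABET) + 1 for s in data_symbols]

def offsets_consistent(first, second, offset=OFFSET):
    """Decoder-side validity check: every pair must show the same offset."""
    return all(((b - a) % ALPHABET) == offset for a, b in zip(first, second))

assert paired_sequence([1, 2, 3]) == [3, 4, 5]
assert offsets_consistent([1, 2, 3], [3, 4, 5])
```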
For received audio signals in the time domain, decoder 600 transforms such signals to the frequency domain by means of function 606. Function 606 is preferably performed by a digital processor implementing a fast Fourier transform (FFT), although a discrete cosine transform, a chirp transform or a Winograd Fourier transform algorithm (WFTA) may be employed in the alternative. Any other time-to-frequency-domain transformation function providing the necessary resolution may be employed in place of these. It will be appreciated that in certain implementations, functions may also be carried out by filters, by an application specific integrated circuit, or any other suitable device or combination of devices. Function 606 may also be implemented by one or more devices which also implement one or more of the remaining functions illustrated in
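For illustration, a time-to-frequency transform of the kind function 606 performs might be sketched as below (Python/NumPy); the sample rate, window length and Hann window are assumptions, chosen so that the ten FFTs per second mentioned later in this disclosure overlap.

```python
import numpy as np

def overlapped_ffts(samples, rate=48000, ffts_per_second=10, window_size=8192):
    """Compute magnitude spectra on overlapping, Hann-windowed frames
    (hop = rate / ffts_per_second; window > hop, so frames overlap)."""
    hop = rate // ffts_per_second        # 4800 samples between FFT starts
    window = np.hanning(window_size)
    spectra = []
    for start in range(0, len(samples) - window_size + 1, hop):
        frame = samples[start:start + window_size] * window
        spectra.append(np.abs(np.fft.rfft(frame)))
    return np.array(spectra)             # one row per FFT period
```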
The frequency domain-converted audio signals are processed in a symbol values derivation function 610 to produce a stream of symbol values for each code symbol included in the received audio signal. The produced symbol values may represent, for example, signal energy, power, sound pressure level, amplitude, etc., measured instantaneously or over a period of time, on an absolute or relative scale, and may be expressed as a single value or as multiple values. Where the symbols are encoded as groups of single frequency components each having a predetermined frequency, the symbol values preferably represent either single frequency component values or one or more values based on single frequency component values. Function 610 may be carried out by a digital processor, such as a DSP, which advantageously carries out some or all of the other functions of decoder 600. However, the function 610 may also be carried out by an application specific integrated circuit, or by any other suitable device or combination of devices, and may be implemented by apparatus apart from the means which implement the remaining functions of the decoder 600.
The stream of symbol values produced by the function 610 is accumulated over time in an appropriate storage device on a symbol-by-symbol basis, as indicated by function 616. In particular, function 616 is advantageous for use in decoding encoded symbols which repeat periodically, by periodically accumulating symbol values for the various possible symbols. For example, if a given symbol is expected to recur every X seconds, the function 616 may serve to store a stream of symbol values for a period of nX seconds (n>1), and add to the stored values one or more further symbol value streams of nX seconds duration, so that peak symbol values accumulate over time, improving the signal-to-noise ratio of the stored values. Function 616 may be carried out by a digital processor, such as a DSP, which advantageously carries out some or all of the other functions of decoder 600. However, the function 616 may also be carried out using a memory device separate from such a processor, or by an application specific integrated circuit, or by any other suitable device or combination of devices, and may be implemented by apparatus apart from the means which implements the remaining functions of the decoder 600.
The accumulated symbol values stored by the function 616 are then examined by the function 620 to detect the presence of an encoded message and output the detected message at an output 626. Function 620 can be carried out by matching the stored accumulated values, or a processed version of such values, against stored patterns, whether by correlation or by another pattern matching technique. However, function 620 is advantageously carried out by examining peak accumulated symbol values and their relative timing to reconstruct the encoded message. This function may be carried out after the first stream of symbol values has been stored by the function 616 and/or after each subsequent stream has been added thereto, so that the message is detected once the signal-to-noise ratios of the stored, accumulated streams of symbol values reveal a valid message pattern.
The decoding configuration disclosed herein is particularly well adapted for detecting code symbols each of which includes a plurality of predetermined frequency components, e.g., ten components, within a frequency range of 1000 Hz to 3000 Hz. In certain embodiments, the decoder may be designed specifically to detect a message having a specific sequence wherein each symbol occupies a specified time interval (e.g., 0.5 sec). In this exemplary embodiment, it is assumed that the symbol set consists of twelve symbols, each having ten predetermined frequency components, none of which is shared with any other symbol of the symbol set. It will be appreciated that the decoder may readily be modified to detect different numbers of code symbols, different numbers of components, different symbol sequences and symbol durations, as well as components arranged in different frequency bands.
In order to separate the various components, the DSP repeatedly carries out FFTs on audio signal samples falling within successive, predetermined intervals. The intervals may overlap, although this is not required. In an exemplary embodiment, ten overlapping FFTs are carried out during each second of decoder operation. Accordingly, the energy of each symbol period falls within five FFT periods. The FFTs are preferably windowed, although this may be omitted in order to simplify the decoder. The samples are stored and, when a sufficient number are thus available, a new FFT is performed. In this embodiment, the frequency component values are produced on a relative basis. That is, each component value is represented as a signal-to-noise ratio (SNR), produced as follows. The energy within each frequency bin of the FFT in which a frequency component of any symbol can fall provides the numerator of each corresponding SNR. Its denominator is determined as an average of adjacent bin values. For example, the average of seven of the eight surrounding bin energy values may be used, the largest value of the eight being ignored in order to avoid the influence of a possible large bin energy value which could result, for example, from an audio signal component in the neighborhood of the code frequency component. Also, given that a large energy value could also appear in the code component bin, for example due to noise or an audio signal component, the SNR is appropriately limited. In this embodiment, if SNR>6.0, then SNR is limited to 6.0, although a different maximum value may be selected.
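A sketch of that per-bin SNR rule follows (Python); the assumption that the eight “surrounding” bins are the four on each side, and the power-spectrum input, are illustrative choices.

```python
import numpy as np

MAX_SNR = 6.0  # cap from the embodiment above

def component_snr(power_spectrum, bin_index):
    """SNR of one code-component bin: bin energy over the average of seven
    of the eight surrounding bins (the largest neighbor is discarded so a
    strong nearby audio component does not depress the ratio)."""
    neighbors = np.concatenate([power_spectrum[bin_index - 4:bin_index],
                                power_spectrum[bin_index + 1:bin_index + 5]])
    noise = (neighbors.sum() - neighbors.max()) / 7.0
    snr = power_spectrum[bin_index] / noise if noise > 0 else MAX_SNR
    return min(snr, MAX_SNR)

def symbol_snr(power_spectrum, component_bins):
    """Combine the (e.g. ten) component SNRs of a symbol by simple addition."""
    return sum(component_snr(power_spectrum, b) for b in component_bins)
```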
The ten SNRs of each FFT, corresponding to each symbol which may be present, are combined to form symbol SNRs which are stored in a circular symbol SNR buffer. In certain embodiments, the ten SNRs for a symbol are simply added, although other ways of combining the SNRs may be employed. The symbol SNRs for each of the twelve symbols are stored in the symbol SNR buffer as separate sequences, one symbol SNR for each FFT, for 50 FFTs. After the values produced in the 50 FFTs have been stored in the symbol SNR buffer, new symbol SNRs are combined with the previously stored values, as described below. In certain advantageous embodiments, the stored SNRs are adjusted to reduce the influence of noise, although this step may be optional. In this optional step, a noise value is obtained for each symbol (row) in the buffer by obtaining the average of all stored symbol SNRs in the respective row each time the buffer is filled. Then, to compensate for the effects of noise, this average or “noise” value is subtracted from each of the stored symbol SNR values in the corresponding row. In this manner, a “symbol” appearing only briefly, and thus not a valid detection, is averaged out over time.
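The optional noise adjustment reduces to a per-row mean subtraction; a sketch under the 12-symbol by 50-FFT-period buffer layout described above:

```python
import numpy as np

def subtract_noise(snr_buffer):
    """snr_buffer: rows = symbols (e.g. 12), cols = FFT periods (e.g. 50).
    Each row's mean is treated as that symbol's noise floor and subtracted,
    so a symbol appearing only briefly is averaged down over time."""
    noise = snr_buffer.mean(axis=1, keepdims=True)
    return snr_buffer - noise

adjusted = subtract_noise(np.random.rand(12, 50))
```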
After the symbol SNRs have been adjusted by subtracting the noise level, the decoder attempts to recover the message by examining the pattern of maximum SNR values in the buffer. In certain embodiments, the maximum SNR values for each symbol are located in a process of successively combining groups of five adjacent SNRs, by weighting the values in the sequence in proportion to the sequential weighting (6 10 10 10 6) and then adding the weighted SNRs to produce a comparison SNR centered in the time period of the third SNR in the sequence. This process is carried out progressively throughout the fifty FFT periods of each symbol. For example, a first group of five SNRs for a specific symbol in FFT time periods (e.g., 1-5) are weighted and added to produce a comparison SNR for a specific FFT period (e.g., 3). Then a further comparison SNR is produced using the SNRs from successive FFT periods (e.g., 2-6), and so on until comparison values have been obtained centered on all FFT periods. However, other means may be employed for recovering the message. For example, either more or fewer than five SNRs may be combined, they may be combined without weighting, or they may be combined in a non-linear fashion.
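The sliding 6-10-10-10-6 combination is a weighted moving sum; a sketch follows (the symmetric kernel makes NumPy's convolution equivalent to direct correlation here).

```python
import numpy as np

WEIGHTS = np.array([6, 10, 10, 10, 6])

def comparison_snrs(row):
    """Slide the 5-tap weighting over one symbol's 50 per-FFT SNRs; each
    output value is centered on the third FFT period of its group."""
    return np.convolve(row, WEIGHTS, mode="valid")   # length len(row) - 4

row = np.zeros(50)
row[20:25] = 1.0                  # a symbol present for five FFT periods
peaks = comparison_snrs(row)
assert peaks.argmax() + 2 == 22   # peak centered on the middle period
```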
After the comparison SNR values have been obtained, the decoder examines the comparison SNR values for a message pattern. Under a preferred embodiment, the synchronization (“marker”) code symbols are located first. Once this information is obtained, the decoder attempts to detect the peaks of the data symbols. The use of a predetermined offset between each data symbol in the first segment and the corresponding data symbol in the second segment provides a check on the validity of the detected message. That is, if both markers are detected and the same offset is observed between each data symbol in the first segment and its corresponding data symbol in the second segment, it is highly likely that a valid message has been received. If this is the case, the message is logged, and the SNR buffer is cleared. It is understood by those skilled in the art that decoder operation may be modified depending on the structure of the message, its timing, its signal path, the mode of its detection, etc., without departing from the scope of the present invention. For example, in place of storing SNRs, FFT results may be stored directly for detecting a message.
In another embodiment, decoding/detecting of universal media code IDs may be performed via a DSP, where a repeating sequence of code symbols comprising a marker symbol followed by a plurality of data symbols is detected, wherein each of the code symbols includes a plurality of predetermined frequency components and has a predetermined duration (e.g., 0.5 sec) in the message sequence. It is assumed in this example that each symbol is represented by ten unique frequency components and that the symbol set includes twelve different symbols. It is understood that this embodiment may readily be modified to detect any number of symbols, each represented by one or more frequency components. A circular buffer may be employed having a specified width and length (e.g., twelve symbols wide by 150 FFT periods long). Once the buffer is filled, new symbol SNRs each replace what are then the oldest symbol SNR values. In effect, the buffer stores a fifteen second window of symbol SNR values. Once the circular buffer is filled, its contents are examined to detect the presence of the universal media ID. The buffer may be configured to remain full continuously, so that the pattern search for codes may be carried out after every FFT.
In this example, if a five-symbol message repeats every 2½ seconds, each symbol repeats at intervals of 2½ seconds, or every 25 FFTs. In order to compensate for the effects of burst errors and the like, the SNRs R1 through R150 are combined by adding corresponding values of the repeating messages to obtain 25 combined SNR values SNRn, n=1, 2 . . . 25, as follows:

SNRn = Rn + R(n+25) + R(n+50) + R(n+75) + R(n+100) + R(n+125)
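Equivalently, the 150 stored values fold into 25 column sums across the six repetitions, e.g.:

```python
import numpy as np

PERIOD = 25   # one message repetition = 25 FFT periods
REPEATS = 6   # 150 stored FFT periods / 25

def combined_snrs(r):
    """Fold R1..R150 (here r[0..149]) into 25 combined values:
    SNR_n = R_n + R_(n+25) + ... + R_(n+125)."""
    r = np.asarray(r)
    return r[:PERIOD * REPEATS].reshape(REPEATS, PERIOD).sum(axis=0)

assert combined_snrs(np.ones(150)).tolist() == [6.0] * 25
```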
Accordingly, if a burst error should result in the loss of a signal interval i, only one of the six message intervals will have been lost, and the essential characteristics of the combined SNR values are likely to be unaffected by this event. Once the combined SNR values have been determined, the decoder detects the position of the marker symbol's peak as indicated by the combined SNR values and derives the data symbol sequence based on the marker's position and the peak values of the data symbols. Once the message has thus been formed, the message is logged. Instead of clearing the buffer, the decoder loads a further set of SNRs in the buffer and continues to search for a message. It will be apparent from the foregoing that the decoder may be modified for different message structures, message timings, signal paths, detection modes, etc., without departing from the scope of the present invention. For example, the buffer may be replaced by any other suitable storage device; the size of the buffer may be varied; the size of the SNR value windows may be varied; and/or the symbol repetition time may vary. Also, instead of calculating and storing signal SNRs to represent the respective symbol values, a measure of each symbol's value relative to the other possible symbols, for example a ranking of each possible symbol's magnitude, may instead be used in certain advantageous embodiments.
In a further variation which is especially useful in audience measurement applications, a relatively large number of message intervals are separately stored to permit a retrospective analysis of their contents to detect a channel change. In another embodiment, multiple buffers are employed, each accumulating data for a different number of intervals for use in the decoding method. For example, one buffer could store a single message interval, another two accumulated intervals, a third four intervals and a fourth eight intervals. Separate detections based on the contents of each buffer are then used to detect a channel change.
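A minimal sketch of the multiple-buffer idea follows (Python), assuming each message interval's symbol SNRs arrive as a list; the horizons (1, 2, 4 and 8 intervals) follow the example above, while the comparison logic that actually flags a channel change is left abstract.

```python
from collections import deque

class IntervalAccumulator:
    """Accumulate symbol SNRs over the most recent n message intervals."""
    def __init__(self, n_intervals):
        self.window = deque(maxlen=n_intervals)

    def add(self, interval_snrs):
        self.window.append(list(interval_snrs))

    def combined(self):
        """Element-wise sum over the retained intervals."""
        return [sum(col) for col in zip(*self.window)]

# One accumulator per horizon; a code that decodes from the short buffers
# but not the long ones suggests a recent channel change.
accumulators = [IntervalAccumulator(n) for n in (1, 2, 4, 8)]
```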
The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
Number | Name | Date | Kind |
---|---|---|---|
4230990 | Lert, Jr. et al. | Oct 1980 | A |
4647974 | Butler et al. | Mar 1987 | A |
4677466 | Lert, Jr. et al. | Jun 1987 | A |
4697209 | Kiewit et al. | Sep 1987 | A |
4745468 | Von Kohorn | May 1988 | A |
4876592 | Von Kohorn | Oct 1989 | A |
4876736 | Kiewit | Oct 1989 | A |
4926255 | Von Kohorn | May 1990 | A |
4973952 | Malec et al. | Nov 1990 | A |
5019899 | Boles et al. | May 1991 | A |
5023929 | Call | Jun 1991 | A |
5034807 | Von Kohorn | Jul 1991 | A |
5057915 | Von Kohorn | Oct 1991 | A |
5081680 | Bennett | Jan 1992 | A |
5128752 | Von Kohorn | Jul 1992 | A |
5157489 | Lowe | Oct 1992 | A |
5227874 | Von Kohorn | Jul 1993 | A |
5245665 | Lewis et al. | Sep 1993 | A |
5249044 | Von Kohorn | Sep 1993 | A |
5283734 | Von Kohorn | Feb 1994 | A |
5331544 | Lu et al. | Jul 1994 | A |
5401946 | Weinblatt | Mar 1995 | A |
5425100 | Thomas et al. | Jun 1995 | A |
5450490 | Jensen et al. | Sep 1995 | A |
5453790 | Vermeulen et al. | Sep 1995 | A |
5481294 | Thomas et al. | Jan 1996 | A |
5512933 | Wheatley et al. | Apr 1996 | A |
5524195 | Clanton, III et al. | Jun 1996 | A |
5543856 | Rosser et al. | Aug 1996 | A |
5574962 | Fardeau et al. | Nov 1996 | A |
5579124 | Aijala et al. | Nov 1996 | A |
5581800 | Fardeau et al. | Dec 1996 | A |
5594934 | Lu et al. | Jan 1997 | A |
5629739 | Dougherty | May 1997 | A |
5659366 | Kerman | Aug 1997 | A |
5666293 | Metz et al. | Sep 1997 | A |
5682196 | Freeman | Oct 1997 | A |
5719634 | Keery et al. | Feb 1998 | A |
5734413 | Lappington et al. | Mar 1998 | A |
5740035 | Cohen et al. | Apr 1998 | A |
5764763 | Jensen et al. | Jun 1998 | A |
5787334 | Fardeau et al. | Jul 1998 | A |
5815671 | Morrison | Sep 1998 | A |
5841978 | Rhoads | Nov 1998 | A |
5848155 | Cox | Dec 1998 | A |
5850249 | Massetti et al. | Dec 1998 | A |
5872588 | Aras et al. | Feb 1999 | A |
5878384 | Johnson et al. | Mar 1999 | A |
5880789 | Inaba | Mar 1999 | A |
5893067 | Bender et al. | Apr 1999 | A |
5918223 | Blum et al. | Jun 1999 | A |
5930369 | Cox et al. | Jul 1999 | A |
5933789 | Byun et al. | Aug 1999 | A |
5956716 | Kenner et al. | Sep 1999 | A |
5956743 | Bruce et al. | Sep 1999 | A |
5966120 | Arazi et al. | Oct 1999 | A |
5974396 | Anderson et al. | Oct 1999 | A |
5978855 | Metz et al. | Nov 1999 | A |
5987855 | Dey et al. | Nov 1999 | A |
6034722 | Viney et al. | Mar 2000 | A |
6035177 | Moses et al. | Mar 2000 | A |
6049830 | Saib | Apr 2000 | A |
6055573 | Gardenswartz et al. | Apr 2000 | A |
6154209 | Naughton et al. | Nov 2000 | A |
6208735 | Cox et al. | Mar 2001 | B1 |
6216129 | Eldering | Apr 2001 | B1 |
6286036 | Rhoads | Sep 2001 | B1 |
6286140 | Ivanyi | Sep 2001 | B1 |
6298348 | Eldering | Oct 2001 | B1 |
6308327 | Liu et al. | Oct 2001 | B1 |
6331876 | Koster et al. | Dec 2001 | B1 |
6335736 | Wagner et al. | Jan 2002 | B1 |
6363159 | Rhoads | Mar 2002 | B1 |
6389055 | August et al. | May 2002 | B1 |
6400827 | Rhoads | Jun 2002 | B1 |
6411725 | Rhoads | Jun 2002 | B1 |
6421445 | Jensen et al. | Jul 2002 | B1 |
6487564 | Asai et al. | Nov 2002 | B1 |
6505160 | Levy et al. | Jan 2003 | B1 |
6512836 | Xie et al. | Jan 2003 | B1 |
6513014 | Walker et al. | Jan 2003 | B1 |
6522771 | Rhoads | Feb 2003 | B2 |
6539095 | Rhoads | Mar 2003 | B1 |
6546556 | Kataoka et al. | Apr 2003 | B1 |
6553178 | Abecassis | Apr 2003 | B2 |
6642966 | Limaye | Nov 2003 | B1 |
6647269 | Hendrey et al. | Nov 2003 | B2 |
6651253 | Dudkiewicz et al. | Nov 2003 | B2 |
6654480 | Rhoads | Nov 2003 | B2 |
6665873 | Van Gestel et al. | Dec 2003 | B1 |
6675383 | Wheeler et al. | Jan 2004 | B1 |
6683966 | Tian et al. | Jan 2004 | B1 |
6687663 | McGrath et al. | Feb 2004 | B1 |
6710815 | Billmaier et al. | Mar 2004 | B1 |
6714683 | Tian et al. | Mar 2004 | B1 |
6741684 | Kaars | May 2004 | B2 |
6748362 | Meyer et al. | Jun 2004 | B1 |
6750985 | Rhoads | Jun 2004 | B2 |
6766523 | Herley | Jul 2004 | B2 |
6795972 | Rovira | Sep 2004 | B2 |
6804379 | Rhoads | Oct 2004 | B2 |
6829368 | Meyer et al. | Dec 2004 | B2 |
6845360 | Jensen et al. | Jan 2005 | B2 |
6853634 | Davies et al. | Feb 2005 | B1 |
6871180 | Neuhauser et al. | Mar 2005 | B1 |
6871323 | Wagner et al. | Mar 2005 | B2 |
6873688 | Aarnio | Mar 2005 | B1 |
6941275 | Swierczek | Sep 2005 | B1 |
6956575 | Nakazawa et al. | Oct 2005 | B2 |
6965601 | Nakano et al. | Nov 2005 | B1 |
6968371 | Srinivasan | Nov 2005 | B1 |
6968564 | Srinivasan | Nov 2005 | B1 |
6970886 | Conwell et al. | Nov 2005 | B1 |
6996213 | De Jong | Feb 2006 | B1 |
7003731 | Rhoads et al. | Feb 2006 | B1 |
7006555 | Srinivasan | Feb 2006 | B1 |
7050603 | Rhoads et al. | May 2006 | B2 |
7051086 | Rhoads et al. | May 2006 | B2 |
7058697 | Rhoads | Jun 2006 | B2 |
7082434 | Gosselin | Jul 2006 | B2 |
7092964 | Dougherty et al. | Aug 2006 | B1 |
7095871 | Jones et al. | Aug 2006 | B2 |
7143949 | Hannigan | Dec 2006 | B1 |
7158943 | Van der Riet | Jan 2007 | B2 |
7171018 | Rhoads et al. | Jan 2007 | B2 |
7174293 | Kenyon et al. | Feb 2007 | B2 |
7185201 | Rhoads et al. | Feb 2007 | B2 |
7194752 | Kenyon et al. | Mar 2007 | B1 |
7197156 | Levy | Mar 2007 | B1 |
7215280 | Percy et al. | May 2007 | B1 |
7221405 | Basson et al. | May 2007 | B2 |
7227972 | Brundage et al. | Jun 2007 | B2 |
7254249 | Rhoads et al. | Aug 2007 | B2 |
7269564 | Milsted et al. | Sep 2007 | B1 |
7273978 | Uhle | Sep 2007 | B2 |
7317716 | Boni et al. | Jan 2008 | B1 |
7328153 | Wells et al. | Feb 2008 | B2 |
7346512 | Li-Chun Wang et al. | Mar 2008 | B2 |
7356700 | Noridomi et al. | Apr 2008 | B2 |
7363278 | Schmelzer et al. | Apr 2008 | B2 |
7369678 | Rhoads | May 2008 | B2 |
7421723 | Harkness et al. | Sep 2008 | B2 |
7440674 | Plotnick et al. | Oct 2008 | B2 |
7443292 | Jensen et al. | Oct 2008 | B2 |
7463143 | Forr et al. | Dec 2008 | B2 |
7519658 | Anglin et al. | Apr 2009 | B1 |
7592908 | Zhang et al. | Sep 2009 | B2 |
7623823 | Zito et al. | Nov 2009 | B2 |
7640141 | Kolessar et al. | Dec 2009 | B2 |
7761602 | Knight et al. | Jul 2010 | B1 |
RE42627 | Neuhauser et al. | Aug 2011 | E |
8229159 | Tourapis et al. | Jul 2012 | B2 |
8255763 | Yang et al. | Aug 2012 | B1 |
8369972 | Topchy et al. | Feb 2013 | B2 |
8768713 | Chaoui et al. | Jul 2014 | B2 |
8924995 | Ramaswamy et al. | Dec 2014 | B2 |
9015563 | Lynch et al. | Apr 2015 | B2 |
9124769 | Srinivasan | Sep 2015 | B2 |
20010044899 | Levy | Nov 2001 | A1 |
20010056573 | Kovac et al. | Dec 2001 | A1 |
20020032734 | Rhoads | Mar 2002 | A1 |
20020033842 | Zetts | Mar 2002 | A1 |
20020053078 | Holtz et al. | May 2002 | A1 |
20020056094 | Dureau | May 2002 | A1 |
20020059218 | August et al. | May 2002 | A1 |
20020062382 | Rhoads et al. | May 2002 | A1 |
20020091991 | Castro | Jul 2002 | A1 |
20020102993 | Hendrey et al. | Aug 2002 | A1 |
20020108125 | Joao | Aug 2002 | A1 |
20020111934 | Narayan | Aug 2002 | A1 |
20020112002 | Abato | Aug 2002 | A1 |
20020124246 | Kaminsky et al. | Sep 2002 | A1 |
20020138851 | Lord et al. | Sep 2002 | A1 |
20020144262 | Plotnick et al. | Oct 2002 | A1 |
20020144273 | Reto | Oct 2002 | A1 |
20020162118 | Levy et al. | Oct 2002 | A1 |
20020174425 | Markel et al. | Nov 2002 | A1 |
20020194592 | Tsuchida et al. | Dec 2002 | A1 |
20030021441 | Levy et al. | Jan 2003 | A1 |
20030039465 | Bjorgan et al. | Feb 2003 | A1 |
20030088674 | Ullman et al. | May 2003 | A1 |
20030105870 | Baum | Jun 2003 | A1 |
20030108200 | Sako | Jun 2003 | A1 |
20030115598 | Pantoja | Jun 2003 | A1 |
20030177488 | Smith et al. | Sep 2003 | A1 |
20030185232 | Moore et al. | Oct 2003 | A1 |
20030229900 | Reisman | Dec 2003 | A1 |
20040004630 | Kalva et al. | Jan 2004 | A1 |
20040006696 | Shin et al. | Jan 2004 | A1 |
20040024588 | Watson et al. | Feb 2004 | A1 |
20040031058 | Reisman | Feb 2004 | A1 |
20040037271 | Liscano et al. | Feb 2004 | A1 |
20040038692 | Muzaffar | Feb 2004 | A1 |
20040059918 | Xu | Mar 2004 | A1 |
20040059933 | Levy | Mar 2004 | A1 |
20040064319 | Neuhauser et al. | Apr 2004 | A1 |
20040073916 | Petrovic et al. | Apr 2004 | A1 |
20040073951 | Bae et al. | Apr 2004 | A1 |
20040102961 | Jensen et al. | May 2004 | A1 |
20040120417 | Lynch et al. | Jun 2004 | A1 |
20040125125 | Levy | Jul 2004 | A1 |
20040128514 | Rhoads | Jul 2004 | A1 |
20040137929 | Jones et al. | Jul 2004 | A1 |
20040143844 | Brant et al. | Jul 2004 | A1 |
20040146161 | De Jong | Jul 2004 | A1 |
20040163020 | Sidman | Aug 2004 | A1 |
20040184369 | Herre et al. | Sep 2004 | A1 |
20040199387 | Wang et al. | Oct 2004 | A1 |
20040267533 | Hannigan et al. | Dec 2004 | A1 |
20050028189 | Heine et al. | Feb 2005 | A1 |
20050033758 | Baxter | Feb 2005 | A1 |
20050036653 | Brundage et al. | Feb 2005 | A1 |
20050058319 | Rhoads et al. | Mar 2005 | A1 |
20050086682 | Burges et al. | Apr 2005 | A1 |
20050144004 | Bennett et al. | Jun 2005 | A1 |
20050192933 | Rhoads et al. | Sep 2005 | A1 |
20050216346 | Kusumoto et al. | Sep 2005 | A1 |
20050234728 | Tachibana et al. | Oct 2005 | A1 |
20050234731 | Sirivara et al. | Oct 2005 | A1 |
20050234774 | Dupree | Oct 2005 | A1 |
20050262351 | Levy | Nov 2005 | A1 |
20050271246 | Sharma et al. | Dec 2005 | A1 |
20060059277 | Zito et al. | Mar 2006 | A1 |
20060083403 | Zhang et al. | Apr 2006 | A1 |
20060089912 | Spagna et al. | Apr 2006 | A1 |
20060095401 | Krikorian et al. | May 2006 | A1 |
20060107195 | Ramaswamy et al. | May 2006 | A1 |
20060107302 | Zdepski | May 2006 | A1 |
20060110005 | Tapson | May 2006 | A1 |
20060136564 | Ambrose | Jun 2006 | A1 |
20060167747 | Goodman et al. | Jul 2006 | A1 |
20060168613 | Wood et al. | Jul 2006 | A1 |
20060212710 | Baum et al. | Sep 2006 | A1 |
20060221173 | Duncan | Oct 2006 | A1 |
20060224798 | Klein et al. | Oct 2006 | A1 |
20060280246 | Alattar et al. | Dec 2006 | A1 |
20070006250 | Croy et al. | Jan 2007 | A1 |
20070016918 | Alcorn et al. | Jan 2007 | A1 |
20070055987 | Lu et al. | Mar 2007 | A1 |
20070061577 | Van De Kerkhof et al. | Mar 2007 | A1 |
20070064937 | Van Leest et al. | Mar 2007 | A1 |
20070070429 | Hein et al. | Mar 2007 | A1 |
20070104335 | Shi et al. | May 2007 | A1 |
20070110089 | Essafi et al. | May 2007 | A1 |
20070118375 | Kenyon et al. | May 2007 | A1 |
20070124771 | Shvadron | May 2007 | A1 |
20070127717 | Herre et al. | Jun 2007 | A1 |
20070129952 | Kenyon et al. | Jun 2007 | A1 |
20070143778 | Covell et al. | Jun 2007 | A1 |
20070149114 | Danilenko | Jun 2007 | A1 |
20070162927 | Ramaswamy et al. | Jul 2007 | A1 |
20070198738 | Angiolillo et al. | Aug 2007 | A1 |
20070201835 | Rhoads | Aug 2007 | A1 |
20070226760 | Neuhauser et al. | Sep 2007 | A1 |
20070240234 | Watson | Oct 2007 | A1 |
20070242826 | Rassool | Oct 2007 | A1 |
20070250716 | Brunk et al. | Oct 2007 | A1 |
20070274523 | Rhoads | Nov 2007 | A1 |
20070276925 | La Joie et al. | Nov 2007 | A1 |
20070276926 | La Joie et al. | Nov 2007 | A1 |
20070288476 | Flanagan, III et al. | Dec 2007 | A1 |
20070294057 | Crystal et al. | Dec 2007 | A1 |
20070294132 | Zhang et al. | Dec 2007 | A1 |
20070294705 | Gopalakrishnan et al. | Dec 2007 | A1 |
20070294706 | Neuhauser et al. | Dec 2007 | A1 |
20080019560 | Rhoads | Jan 2008 | A1 |
20080022114 | Moskowitz | Jan 2008 | A1 |
20080027734 | Zhao et al. | Jan 2008 | A1 |
20080028223 | Rhoads | Jan 2008 | A1 |
20080028474 | Horne et al. | Jan 2008 | A1 |
20080040354 | Ray et al. | Feb 2008 | A1 |
20080059160 | Saunders et al. | Mar 2008 | A1 |
20080065507 | Morrison et al. | Mar 2008 | A1 |
20080077956 | Morrison et al. | Mar 2008 | A1 |
20080082510 | Wang et al. | Apr 2008 | A1 |
20080082922 | Biniak et al. | Apr 2008 | A1 |
20080083003 | Biniak et al. | Apr 2008 | A1 |
20080133223 | Son et al. | Jun 2008 | A1 |
20080137749 | Tian et al. | Jun 2008 | A1 |
20080139182 | Levy et al. | Jun 2008 | A1 |
20080140573 | Levy et al. | Jun 2008 | A1 |
20080168503 | Sparrell | Jul 2008 | A1 |
20080209219 | Rhein | Aug 2008 | A1 |
20080209491 | Hasek | Aug 2008 | A1 |
20080215333 | Tewfik et al. | Sep 2008 | A1 |
20080219496 | Tewfik et al. | Sep 2008 | A1 |
20080235077 | Harkness et al. | Sep 2008 | A1 |
20090007159 | Rangarajan et al. | Jan 2009 | A1 |
20090031134 | Levy | Jan 2009 | A1 |
20090070408 | White | Mar 2009 | A1 |
20090070587 | Srinivasan | Mar 2009 | A1 |
20090119723 | Tinsman | May 2009 | A1 |
20090125310 | Lee et al. | May 2009 | A1 |
20090150553 | Collart et al. | Jun 2009 | A1 |
20090193052 | FitzGerald et al. | Jul 2009 | A1 |
20090259325 | Topchy et al. | Oct 2009 | A1 |
20090265214 | Jobs et al. | Oct 2009 | A1 |
20090307061 | Monighetti et al. | Dec 2009 | A1 |
20090307084 | Monighetti et al. | Dec 2009 | A1 |
20100008500 | Lisanke et al. | Jan 2010 | A1 |
20100135638 | Mia | Jun 2010 | A1 |
20100166120 | Baum et al. | Jul 2010 | A1 |
20100223062 | Srinivasan et al. | Sep 2010 | A1 |
20100268573 | Jain et al. | Oct 2010 | A1 |
20110224992 | Chaoui et al. | Sep 2011 | A1 |
20120239407 | Lynch et al. | Sep 2012 | A1 |
20120278651 | Muralimanohar et al. | Nov 2012 | A1 |
20130096706 | Srinivasan et al. | Apr 2013 | A1 |
20130144631 | Miyasaka et al. | Jun 2013 | A1 |
20140026159 | Cuttner | Jan 2014 | A1 |
20140189724 | Harkness et al. | Jul 2014 | A1 |
20140226814 | Fernando | Aug 2014 | A1 |
20150039321 | Neuhauser et al. | Feb 2015 | A1 |
20150039322 | Lynch et al. | Feb 2015 | A1 |
20150039972 | Lynch et al. | Feb 2015 | A1 |
20150221312 | Lynch et al. | Aug 2015 | A1 |
Number | Date | Country |
---|---|---|
8976601 | Feb 2002 | AU |
9298201 | Apr 2002 | AU |
2003230993 | Nov 2003 | AU |
2006203639 | Sep 2006 | AU |
0112901 | Jun 2003 | BR |
0309598 | Feb 2005 | BR |
2483104 | Nov 2003 | CA |
1149366 | May 1997 | CN |
1372682 | Oct 2002 | CN |
1592906 | Mar 2005 | CN |
1647160 | Jul 2005 | CN |
101243688 | Aug 2008 | CN |
0769749 | Apr 1997 | EP |
1267572 | Dec 2002 | EP |
1349370 | Oct 2003 | EP |
1406403 | Apr 2004 | EP |
1307833 | Jun 2006 | EP |
1703460 | Sep 2006 | EP |
1745464 | Oct 2007 | EP |
1704695 | Feb 2008 | EP |
1504445 | Aug 2008 | EP |
2487858 | Aug 2012 | EP |
2001040322 | Aug 2002 | JP |
2002247610 | Aug 2002 | JP |
2003208187 | Jul 2003 | JP |
2003536113 | Dec 2003 | JP |
2006154851 | Jun 2006 | JP |
2007318745 | Dec 2007 | JP |
9527349 | Oct 1995 | WO |
9702672 | Jan 1997 | WO |
0004662 | Jan 2000 | WO |
0019699 | Apr 2000 | WO |
0119088 | Mar 2001 | WO |
0124027 | Apr 2001 | WO |
0131497 | May 2001 | WO |
0140963 | Jun 2001 | WO |
0153922 | Jul 2001 | WO |
0175743 | Oct 2001 | WO |
0191109 | Nov 2001 | WO |
0205517 | Jan 2002 | WO |
0211123 | Feb 2002 | WO |
0215081 | Feb 2002 | WO |
0217591 | Feb 2002 | WO |
0219625 | Mar 2002 | WO |
0227600 | Apr 2002 | WO |
0237381 | May 2002 | WO |
0245034 | Jun 2002 | WO |
02061652 | Aug 2002 | WO |
02065305 | Aug 2002 | WO |
02065318 | Aug 2002 | WO |
02069121 | Sep 2002 | WO |
03009277 | Jan 2003 | WO |
03091990 | Nov 2003 | WO |
03094499 | Nov 2003 | WO |
03096337 | Nov 2003 | WO |
2004010352 | Jan 2004 | WO |
2004040416 | May 2004 | WO |
2004040475 | May 2004 | WO |
2005025217 | Mar 2005 | WO |
2005064885 | Jul 2005 | WO |
2005101243 | Oct 2005 | WO |
2005111998 | Nov 2005 | WO |
2006012241 | Feb 2006 | WO |
2006025797 | Mar 2006 | WO |
2007056531 | May 2007 | WO |
2007056532 | May 2007 | WO |
2008042953 | Apr 2008 | WO |
2008044664 | Apr 2008 | WO |
2008045950 | Apr 2008 | WO |
2008110002 | Sep 2008 | WO |
2008110790 | Sep 2008 | WO |
2009011206 | Jan 2009 | WO |
2009061651 | May 2009 | WO |
2009064561 | May 2009 | WO |
2010048458 | Apr 2010 | WO |
Entry |
---|
European Patent Office, “Examination Report,” issued in connection with application No. EP 09 748 892.8-1908, on Dec. 21, 2015 (7 pages). |
Patent Cooperation Treaty, “International Search Report and Written Opinion,” issued in connection with Application No. PCT/US2014/049202, Nov. 12, 2014, 9 pages. |
Fink et al., “Social- and Interactive-Television Applications Based on Real-Time Ambient-Audio Identification,” EuroITV, 2006 (10 pages). |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/464,811, Aug. 31, 2012, 46 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/464,811, Apr. 1, 2013, 23 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/464,811, Dec. 17, 2013, 41 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 13/955,438, May 22, 2015, 27 pages. |
Arbitron, “Critical Band Encoding Technology Audio Encoding System From Arbitron,” Feb. 2008, 27 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 14/023,226, Jul. 31, 2014, 15 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 14/023,226, Dec. 23, 2014, 23 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 14/023,221, Jun. 29, 2015, 66 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 14/023,221, mailed Dec. 3, 2015, 11 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 14/685,984, mailed Dec. 18, 2015, 5 pages. |
Patent Cooperation Treaty, “International preliminary report on patentability,” issued in connection with Application No. PCT/US2014/049202, mailed on Feb. 11, 2016, 5 pages. |
Chinese Patent Office, “Certificate of Grant,” issued in connection with Chinese Application No. 12105179.8, mailed on Mar. 18, 2016, 3 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/464,811, Mar. 10, 2016, 14 pages. |
U.S. Appl. No. 14/023,221, filed Sep. 10, 2013 (44 pages). |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 14/023,221, mailed Aug. 31, 2016 (13 pages). |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 13/955,438 mailed Sep. 9, 2016 (13 pages). |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 13/955,438 mailed Dec. 4, 2015 (12 pages). |
United States Patent and Trademark Office, “Corrected Non-Final Office Action,” issued in connection with U.S. Appl. No. 13/955,438 mailed Jun. 2, 2015 (14 pages). |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 14/685,984 mailed Jul. 31, 2015 (5 pages). |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/464,811, mailed Sep. 23, 2016 (8 pages). |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/464,811, Jul. 6, 2015, 52 pages. |
Number | Date | Country | |
---|---|---|---|
20150039320 A1 | Feb 2015 | US |