Content identifiers triggering corresponding responses through collaborative processing

Information

  • Patent Grant
  • Patent Number
    7,302,574
  • Date Filed
    Thursday, June 21, 2001
  • Date Issued
    Tuesday, November 27, 2007
Abstract
Fingerprint data derived from audio or other content is used as an identifier. The fingerprint data can be derived from the content. In one embodiment, fingerprint data supplied from two or more sources is aggregated. The aggregated fingerprint data is used to define a set of audio signals. An audio signal from the set of audio signals is selected based on its probability of matching the fingerprint data. Digital watermarks can also be similarly used to define a set of audio signals.
Description
FIELD OF THE INVENTION

The present invention relates to computer-based systems, and more particularly relates to systems that identify electronic or physical objects (e.g., audio, printed documents, video, etc.), and trigger corresponding responses.


BACKGROUND

In application Ser. No. 09/571,422 (now laid-open as PCT publication WO 00/70585), the present assignee described technology that can sense an object identifier from a physical or electronic object, and trigger a corresponding computer response.


In applications Ser. Nos. 09/574,726 and 09/476,686, the present assignee described technology that uses a microphone to sense audio sounds, determine an identifier corresponding to the audio, and then trigger a corresponding response.







DETAILED DESCRIPTION

Although the cited patent applications focused on use of digital watermarks to identify the subject objects/audio, they noted that the same applications and benefits can be provided with other identification technologies.


One such suitable technology—variously known as robust hashing, fingerprinting, etc.—involves generating an identifier from attributes of the content. This identifier can then be looked-up in a database (or other data structure) to determine the song (or other audio track) to which it corresponds.


Various fingerprinting technologies are known. For example, a software program called TRM, from Relatable Software, was written up in the Washington Post as follows:

    • TRM performs a small technological miracle: It “fingerprints” songs, analyzing beat and tempo to generate a unique digital identifier. Since every song is slightly different, no two “acoustic fingerprints” are alike, not even live and studio versions of the same melody.


Tuneprint is another such audio fingerprinting tool. Tuneprint is understood to utilize a model of human hearing, covering the distortion introduced by the ear and those parts of neural processing that are understood, to predict how audio will be perceived. This is some of the same information that enabled MP3 encoders to achieve exceptional audio compression. Characteristics that uniquely identify the track are then identified by picking out the most important, surprising, or significant features of the sound.


Yet another fingerprinting program is Songprint, available as an open source library from freetantrum.org.


Still other fingerprinting technologies are available from Cantametrix (see, e.g., published patent applications WO01/20483 and WO01/20609).


One particular approach to fingerprinting is detailed in the present assignee's application Ser. No. 60/263,490, filed Jan. 22, 2001.


One form of fingerprint may be derived by applying content—in whole or part, and represented in time- or frequency format—to a neural network, such as a Kohonen self-organizing map. For example, a song may be identified by feeding the first 30 seconds of audio, with 20 millisecond Fourier transformed windows, into a Kohonen network having 64 outputs. The 64 outputs can, themselves, form the fingerprint, or they can be further processed to yield the fingerprint.
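

A minimal sketch of such an arrangement follows. It is illustrative only: the pre-trained Kohonen map is simulated with a fixed random weight matrix, and the sample rate, normalization, and use of per-unit activation counts as the 64-element output are assumptions made for the example, not details taken from the text.

    # Illustrative sketch: a pre-trained 64-unit Kohonen map is simulated with a
    # fixed random weight matrix; sample rate and normalization are assumptions.
    import numpy as np

    SAMPLE_RATE = 44100     # assumed sampling rate
    WINDOW_SEC = 0.020      # 20 millisecond analysis windows, per the text
    CLIP_SEC = 30           # first 30 seconds of the song, per the text
    MAP_UNITS = 64          # Kohonen network with 64 outputs

    def spectral_frames(audio):
        """Split the first 30 s of a mono signal into 20 ms windows and
        Fourier-transform each window."""
        win = int(SAMPLE_RATE * WINDOW_SEC)
        clip = audio[: SAMPLE_RATE * CLIP_SEC]
        n_frames = len(clip) // win
        frames = clip[: n_frames * win].reshape(n_frames, win)
        spectra = np.abs(np.fft.rfft(frames, axis=1))
        norms = np.linalg.norm(spectra, axis=1, keepdims=True) + 1e-9
        return spectra / norms          # normalize so loudness matters less

    def kohonen_fingerprint(audio, seed=0):
        """Map each spectral frame to its best-matching unit and use the 64
        per-unit activation counts as the fingerprint vector."""
        spectra = spectral_frames(audio)
        rng = np.random.default_rng(seed)
        weights = rng.standard_normal((MAP_UNITS, spectra.shape[1]))
        weights /= np.linalg.norm(weights, axis=1, keepdims=True)
        winners = np.argmax(spectra @ weights.T, axis=1)    # best-matching units
        counts = np.bincount(winners, minlength=MAP_UNITS)
        return counts / counts.sum()                        # 64-element fingerprint

Two captures of the same song should then yield nearby 64-element vectors, which can be compared by any convenient distance measure.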


A variety of other fingerprinting tools and techniques are known to artisans in the field. Others are disclosed, e.g., in applications Ser. Nos. 60/257,822, 09/563,664, and 09/578,551. See also the chapter on Fingerprinting by John Hyeon Lee, in Information Hiding: Techniques for Steganography and Digital Watermarking, edited by Stefan Katzenbeisser and Fabien A. P. Petitcolas, published by Artech House.


One way to generate a fingerprint is to “hash” the audio, to derive a shorter code that is dependent, in a predetermined way, on the audio data. However, slight differences in the audio data (such as sampling rate) can cause two versions of the same song to yield two different hash codes. While this outcome is advantageous in certain applications, it is disadvantageous in many others.


Generally preferable are audio fingerprinting techniques that yield the same fingerprints, even if the audio data are slightly different. Thus, a song sampled at a 96K bit rate desirably should yield the same fingerprint as the same song sampled at 128K. Likewise, a song embedded with steganographic watermark data should generally yield the same fingerprint as the same song without embedded watermark data.


One way to do this is to employ a hash function that is insensitive to certain changes in the input data. Thus, two audio tracks that are acoustically similar will hash to the same code, notwithstanding the fact that individual bits are different. A variety of such hashing techniques are known.


Another approach does not rely on “hashing” of the audio data bits. Instead, the audio is decomposed into elements having greater or lesser perceptibility. Audio compression techniques employ such decomposition methods, and discard the elements that are essentially imperceptible. In fingerprinting, these elements can also be disregarded, and the “fingerprint” taken from the acoustically significant portions of the audio (e.g., the most significant coefficients after transformation of the audio into a transform domain, such as DCT).
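

A minimal sketch of that approach, assuming a block DCT and keeping only the indices of the few largest-magnitude coefficients per block (the block size and the number of coefficients kept are illustrative choices, not taken from the text):

    # Illustrative sketch: per-block fingerprints from the perceptually dominant
    # DCT coefficients; block size and coefficient count are assumptions.
    import numpy as np
    from scipy.fft import dct

    BLOCK = 4096   # samples per analysis block (assumed)
    KEEP = 8       # largest-magnitude coefficients kept per block (assumed)

    def significant_coefficient_fingerprint(audio):
        n_blocks = len(audio) // BLOCK
        blocks = audio[: n_blocks * BLOCK].reshape(n_blocks, BLOCK)
        coeffs = dct(blocks, axis=1)
        # The indices of the largest coefficients stand in for the
        # "acoustically significant portions" of each block.
        top = np.argsort(np.abs(coeffs), axis=1)[:, -KEEP:]
        return [tuple(sorted(int(i) for i in row)) for row in top]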


Some fingerprinting techniques do not rely on the absolute audio data (or transformed data) per se, but rather rely on the changes in such data from sample to sample (or coefficient to coefficient) as an identifying hallmark of the audio.
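

A corresponding sketch of a change-based fingerprint, using only the sign of coefficient-to-coefficient (or sample-to-sample) differences rather than the absolute values (an illustrative choice):

    # Illustrative sketch: fingerprint bits taken from the direction of change
    # between successive coefficients, not from their absolute values.
    import numpy as np

    def delta_sign_fingerprint(coeffs):
        """coeffs: 1-D array of transform coefficients (or samples) for an excerpt."""
        bits = (np.diff(coeffs) > 0).astype(int)
        return ''.join(str(b) for b in bits)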


Some fingerprinting algorithms consider the entire audio track (e.g., 3 minutes). Others work on much shorter windows—a few seconds, or fractions of seconds. The former technique yields a single fingerprint for the track. The latter yields plural fingerprints—one from each excerpt. (The latter fingerprints can be concatenated, or otherwise combined, to yield a master fingerprint for the entire audio track.) For compressed audio, one convenient unit from which excerpts can be formed is the frame or window used in the compression algorithm (e.g., the excerpt can be one frame, five frames, etc.).
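

The excerpt-based structure can be sketched as follows; the excerpt length (five compression frames) is an assumed example, and the per-excerpt fingerprint function is a stand-in for any of the techniques discussed above:

    # Illustrative sketch: cut the track into excerpts, fingerprint each excerpt,
    # and combine the excerpt fingerprints into a master fingerprint.
    FRAMES_PER_EXCERPT = 5   # assumed excerpt size, in compression frames

    def excerpt_fingerprints(frames, fingerprint_fn):
        """frames: list of decoded frames; fingerprint_fn: any per-excerpt fingerprinter."""
        prints = []
        for i in range(0, len(frames), FRAMES_PER_EXCERPT):
            excerpt = frames[i:i + FRAMES_PER_EXCERPT]
            prints.append(fingerprint_fn(excerpt))
        return prints

    def master_fingerprint(frames, fingerprint_fn):
        # Concatenating (or otherwise combining) the excerpt fingerprints yields a
        # single fingerprint for the entire track.
        return tuple(excerpt_fingerprints(frames, fingerprint_fn))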


One advantage to the excerpt-based techniques is that a song can be correctly identified even if it is truncated. Moreover, the technique is well suited for use with streaming media (in which the entire song data is typically not available all at once as a single file).


In database look-up systems employing fingerprints from short excerpts, a first fingerprint may be found to match 10 songs. To resolve this ambiguity, subsequent excerpt-fingerprints can be checked.


One way of making fingerprints “robust” against variations among similar tracks is to employ probabilistic methods using excerpt-based fingerprints. Consider the following, over-simplified, example:


    Fingerprinted excerpt    Matches these songs in database
    Fingerprint 1            A, B, C
    Fingerprint 2            C, D, E
    Fingerprint 3            B, D, F
    Fingerprint 4            B, F, G

This yields a “vote” tally as follows:

    Matches to    A   B   C   D   E   F   G
    # Hits        1   3   2   2   1   2   1


In this situation, it appears most probable that the fingerprints correspond to song B, since three of the four excerpt-fingerprints support such a conclusion. (Note that one of the excerpts—that which yielded Fingerprint 2—does not match song B at all.)


More sophisticated probabilistic techniques, of course, can be used.
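

The tally in the example can be computed mechanically, as in the following sketch (the match lists mirror the table above; the data structures themselves are illustrative):

    # Illustrative sketch of the vote tally: each excerpt-fingerprint contributes
    # one vote to every candidate song it matches, and the song with the most
    # votes is taken as the most probable identification.
    from collections import Counter

    matches = {
        'Fingerprint 1': ['A', 'B', 'C'],
        'Fingerprint 2': ['C', 'D', 'E'],
        'Fingerprint 3': ['B', 'D', 'F'],
        'Fingerprint 4': ['B', 'F', 'G'],
    }

    tally = Counter(song for songs in matches.values() for song in songs)
    best_song, votes = tally.most_common(1)[0]
    print(best_song, votes)    # B 3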


Once a song has been identified in a database, a number of different responses can be triggered. One is to impose a set of usage controls corresponding to terms set by the copyright holder (e.g., play control limitations, record control, fee charges, etc.). Another is to identify metadata related to the song, and provide the metadata to a user (or a link to the metadata). In some such applications, the song is simply identified by title and artist, and this information is returned to the user, e.g., by email, instant messaging, etc. With this information, the user can be given an option to purchase the music in CD or electronic form, purchase related materials (t-shirts, concert tickets), etc. A great variety of other content-triggered actions are disclosed in the cited applications.


One of the advantages of fingerprint-based content identification systems is that they do not require any alteration to the content. Thus, recordings made 50 years ago can be fingerprinted, and identified through such techniques.


Going forward, there are various advantages to encoding the content with the fingerprint. Thus, for example, a fingerprint identifier derived from a song can be stored in a file header of a file containing that song. (MP3 files, MPEG files, and most other content file formats include header fields in which such information can readily be stored.) The fingerprint can then be obtained in two different ways—by reading the header information, and by computation from the audio data. This redundancy offers several advantages. One concerns security. If a file has a header-stored fingerprint that does not match a fingerprint derived from the file contents, something is amiss—the file may be destructive (e.g., a bomb or virus), or the file structure may misidentify the file contents.


In some embodiments, the fingerprint data (or watermark data) stored in the header may be encrypted, and/or authenticated by a digital signature such as a complete hash, or a few check bits or CRC bits. In such cases, the header data can be the primary source of the fingerprint (watermark) information, with the file contents being processed to re-derive the fingerprint (watermark) only if authentication of the fingerprint stored in the header fails. Instead of including the fingerprint in the header, the header can include an electronic address or pointer data indicating another location (e.g., a URL or database record) at which the fingerprint data is stored. Again, this information may be secured using known techniques.
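

A minimal sketch of this header-first policy, assuming an HMAC-style signature over the header-stored fingerprint (the signature scheme, and the callable used to re-derive the fingerprint from the content, are illustrative stand-ins for format-specific code):

    # Illustrative sketch: use the header-stored fingerprint when it authenticates,
    # and re-derive the fingerprint from the content only when it does not.
    import hashlib
    import hmac

    def authenticated_fingerprint(header_fp, signature, hmac_key, derive_from_content):
        """header_fp, signature: bytes read from the file header.
        derive_from_content: callable that recomputes the fingerprint from the audio."""
        expected = hmac.new(hmac_key, header_fp, hashlib.sha256).digest()
        if hmac.compare_digest(signature, expected):
            return header_fp                  # header data authenticated: primary source
        # Authentication failed: fall back to the content-derived fingerprint.
        content_fp = derive_from_content()
        if content_fp != header_fp:
            # Header and content disagree: the file may be mislabeled, damaged, or hostile.
            raise ValueError("stored fingerprint does not match the file contents")
        return content_fp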


Similarly, the fingerprint can point to a database that contains one or more IDs that are added via a watermark. This is useful when CDs are being converted to MP3 files (i.e., ripped) and the fingerprint is calculated from a hash of the table of contents (TOC), as is done with CDDB.com, or from all of the songs. In this case, the database entry for that fingerprint could include a list of IDs for each song, and these IDs are added via a watermark and/or frame header data. This can also be useful where the content is identified based upon a group of fingerprints from plural excerpts, in which case the database that identifies the content also contains an identifier for that piece of content, unrelated to the fingerprint(s), that can be embedded via a watermark.


Instead of, or in addition to, storing a fingerprint in a file header, the fingerprint data may be steganographically encoded into the file contents itself, using known watermarking techniques (e.g., those disclosed in application Ser. No. 09/503,881, and U.S. Pat. Nos. 6,061,793, 6,005,501 and 5,940,135). For example, the fingerprint ID can be duplicated in the data embedded via a watermark.


In some arrangements, a watermark can convey a fingerprint, and auxiliary data as well. The file header can also convey the fingerprint, and the auxiliary data. And even if the file contents are separated from the header, and the watermark is corrupted or otherwise lost, the fingerprint can still be recovered from the content. In some cases, the lost auxiliary data can alternatively be obtained from information in a database record identified by the fingerprint (e.g., the auxiliary information can be literally stored in the record, or the record can point to another source where the information is stored).


Instead of especially processing a content file for the purpose of encoding fingerprint data, this action can be done automatically each time certain applications process the content for other purposes. For example, a rendering application (such as an MP3 player or MPEG viewer), a compression program, an operating system file management program, or other-purposed software can calculate the fingerprint from the content and encode the content with that information (e.g., using header data, or digital watermarking). It does this while the file is being processed for another purpose, e.g., taking advantage of the file's copying from slower storage into the processing system's RAM.


In formats in which content is segregated into portions, such as MP3 frames, a fingerprint can be calculated for, and encoded in association with, each portion. Such fingerprints can later be crosschecked against fingerprint data calculated from the content information, e.g., to confirm delivery of paid-for content. Such fingerprints may be encrypted and locked to the content, as contemplated in application Ser. No. 09/620,019.


In addition, in such frame-based systems, the fingerprint data and/or watermark data can be embedded with some or all of the data throughout each frame. This way a streaming system can first use the header to check the song's identification, and if that identification is absent or cannot be authenticated, the system can check for the watermark and/or calculate the fingerprint. This improves the efficiency and reduces the cost of the detecting system.


Before being encrypted and digitally signed, the data in the frame header can be modified by the content, possibly using a hash of the content or a few critical bits of the content. Thus, the frame header data cannot be transferred between content files. When reading the data, it must be modified by the inverse transform of the earlier modification. This system can be applied whether the data is embedded throughout each frame or all in a global file header, and is discussed in application Ser. No. 09/404,291, entitled “Method And Apparatus For Robust Embedded Data,” filed by Ken Levy on Sep. 23, 1999. Reading this secure header data is only slightly more complex than without the modification, such that the system is more efficient than always having to calculate the fingerprint and/or detect the watermark.
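

One simple realization of such a content-dependent modification is sketched below, assuming the header payload is XORed with a hash of the frame's audio data (the particular hash and the use of XOR are illustrative assumptions; because XOR is its own inverse, reading applies the same operation):

    # Illustrative sketch: bind header data to the frame content by XORing it with
    # a content-derived mask before encryption/signing; the same XOR undoes it.
    import hashlib

    def content_mask(frame_audio, length):
        return hashlib.sha256(frame_audio).digest()[:length]

    def bind_header(header_payload, frame_audio):
        mask = content_mask(frame_audio, len(header_payload))
        return bytes(a ^ b for a, b in zip(header_payload, mask))

    def unbind_header(bound_payload, frame_audio):
        # Applying the same content-derived XOR inverts the earlier modification,
        # so a header copied onto different content decodes to garbage.
        return bind_header(bound_payload, frame_audio)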


COLLABORATION

In some situations, content may be processed by plural users, at about the same time, to generate corresponding identifiers. This may occur, for example, where the content is a song or advertisement broadcast over the radio. Many listeners in a metropolitan area may process audio from the same song broadcast over the radio, e.g., to learn the artist or song title, to engage in some related e-commerce activity, or for another purpose (such as the other purposes identified in the cited applications).


In such cases it may be desirable to employ collaboration between such users, e.g., to assure more accurate results, to reduce the processing burden, etc.


In one embodiment, each user generates several different fingerprints from the content (such as those identified in the table, above). These fingerprints may be aggregated with other fingerprints submitted from other users within a given time window (e.g., within the past twenty seconds, or within the past fifteen and next five seconds). Since more data is being considered, the “correct” match may more likely stand out from spurious, incorrect matches.


Consider Users 1 and 2, whose content yields fingerprints giving the following matches (User 1 is unchanged from the earlier example):

    Fingerprinted excerpt        Matches these songs in database
    User 1, Fingerprint N        A, B, C
    User 1, Fingerprint N + 1    C, D, E
    User 1, Fingerprint N + 2    B, D, F
    User 1, Fingerprint N + 3    B, F, G
    User 2, Fingerprint M        A, B, E
    User 2, Fingerprint M + 1    H, I, A
    User 2, Fingerprint M + 2    X, Y, Z

Aggregating the fingerprints from the two users results in an enhanced vote tally in which song B is the evident correct choice—with a higher probability of certainty than in the example earlier given involving a single user:

    Matches to    A   B   C   D   E   F   G   H   I   X   Y   Z
    # Hits        2   4   2   2   2   2   1   1   1   1   1   1


Moreover, note that User 2's results are wholly ambiguous—no song received more than a single candidate match. Only when augmented by consideration of fingerprints from User 1 can a determination for User 2 be made. This collaboration aids the situation where several users are listening to the same content. If two users are listening to different content, it is highly probable that the fingerprints of the two users will be uncorrelated. No benefit arises in this situation, but the collaboration does not work an impairment, either. (In identifying the song for User 1, the system would only check the candidates for whom User 1 voted. Thus, if the above table showed 5 votes for a song J, that large vote count would not be considered in identifying the song for User 1, since none of the fingerprints from User 1 corresponded to that song.)
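

A minimal sketch of this pooled tally with the per-user candidate restriction (the data structures are illustrative assumptions):

    # Illustrative sketch: pool the votes from every user within the time window,
    # but consider a song for a given user only if at least one of that user's own
    # fingerprints matched it.
    from collections import Counter

    def identify_for_user(own_matches, window_matches):
        """own_matches: candidate-song lists from one user's excerpt-fingerprints.
        window_matches: candidate-song lists from all users in the time window."""
        pooled = Counter(song for songs in window_matches for song in songs)
        own_candidates = {song for songs in own_matches for song in songs}
        if not own_candidates:
            return None
        return max(own_candidates, key=lambda song: pooled[song])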


It will be recognized that the different fingerprints obtained by different users from the same song may be due to a myriad of different factors, such as ambient noise, radio multipath reception, different start times for audio capture, etc.


In the example just given, the number of fingerprints computed for each user can be reduced when compared with non-collaborative approaches, while still providing enhanced confidence in the final song determination.


Another collaborative embodiment employs a reference system. Consider again the example of radio broadcasts in a metropolitan area. Reference receivers can be installed that continuously receive audio from each of several different radio stations. Instead of relying on sound picked up by a microphone from an ambient setting, the reference receivers can generate fingerprint data from the audio in electronic form (e.g., the fingerprint-generation system can be wired to the audio output of the receiver). Without the distortion inherent in rendering through a loudspeaker, sensing through a microphone, and ambient noise effects, more accurate fingerprints may be obtained.


The reference fingerprints can be applied to the database to identify—in essentially real-time and with a high degree of certainty—the songs (or other audio signals) being broadcast by each station. The database can include a set of fingerprints associated with the song. Alternatively, the reference receiver can generate fingerprints corresponding to the identified song.


Consumers listen to audio, and fingerprints are generated therefrom, as before. However, instead of applying the consumer-audio fingerprints to the database (which may involve matching to one of hundreds of thousands of possible songs), the consumer fingerprints are compared to the fingerprints generated by the reference receivers (or the songs determined therefrom). The number of such reference fingerprints will be relatively low, related to the number of broadcast stations being monitored. If a consumer-audio fingerprint correlates well with one of the reference fingerprints, then the song corresponding to that reference fingerprint is identified as the song to which the consumer is listening. If the consumer-audio fingerprint does not correlate well with any of the reference fingerprints, then the system may determine that the audio heard by the consumer is not in the subset monitored by the reference receivers, and the consumer-audio fingerprints can thereafter be processed against the full fingerprint database, as earlier described.
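

A minimal sketch of this two-stage lookup, assuming fingerprints are vectors compared by normalized correlation against an assumed threshold, with the full-catalog search abstracted as a callable:

    # Illustrative sketch: compare the consumer fingerprint against the small set
    # of reference fingerprints first; fall back to the full database on a poor match.
    import numpy as np

    MATCH_THRESHOLD = 0.9    # assumed correlation threshold

    def identify(consumer_fp, reference_fps, full_database_lookup):
        """reference_fps: {song_id: fingerprint vector}, one per monitored station."""
        best_song, best_score = None, -1.0
        for song_id, ref_fp in reference_fps.items():
            score = float(np.dot(consumer_fp, ref_fp) /
                          (np.linalg.norm(consumer_fp) * np.linalg.norm(ref_fp) + 1e-9))
            if score > best_score:
                best_song, best_score = song_id, score
        if best_score >= MATCH_THRESHOLD:
            return best_song                       # a monitored broadcast was matched
        return full_database_lookup(consumer_fp)   # fall back to the full catalog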


The system just described is well suited for applications in which the geographical location of the consumer is known, or can be inferred. For example, if the consumer device that is listening to the audio is a cell phone, and the cellular wireless infrastructure is used to relay data with the phone, the cell system can determine the geographical location of the listener (e.g., by area code, cell site, etc.). (Use of such cell-system data to help geographically locate the user can be employed advantageously in several such song-identification systems.)


Even if the consumer's location cannot be determined, the number of songs playing on radio stations nationwide is still a small subset of the total number of possible songs. So a nationwide system, with monitoring stations in many metropolitan areas, can be used to advantage.


As an optional enhancement to such a collaborative system, broadcast signals (e.g., audio signals) are digitally watermarked. The digital watermark preferably contains plural-bit data, which is used to identify the audio signal (e.g., a set of audio fingerprints from the audio signal, song title, copyright, album, artist, and/or record label, etc., etc.). The plural-bit data can either directly or indirectly identify the audio signal. In the indirect case, the plural-bit data includes a unique identifier, which can be used to interrogate a database. The database preferably includes some or all of the identifying information mentioned above. A reference receiver decodes an embedded digital watermark from a received audio signal. The unique identifier is used to interrogate the database to identify a fingerprint or a set of fingerprints associated with the particular audio signal. In some cases, the set includes one fingerprint; in other cases, the set includes a plurality of fingerprints. On the user side, fingerprints are generated and relayed to the reference receiver (or associated interface). The user's fingerprints are then compared against the reference fingerprints, as discussed above in the earlier embodiments.
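

A minimal sketch of this watermark-assisted variant; the watermark decoder, the database interface, and the fingerprint comparison are illustrative stand-ins supplied by the caller:

    # Illustrative sketch: decode the plural-bit identifier from the broadcast,
    # look up the associated reference fingerprints, and compare the user's
    # fingerprints against that set.
    def identify_via_watermark(broadcast_audio, user_fps, decode_watermark, database, compare):
        """decode_watermark: callable returning the embedded plural-bit identifier.
        database: mapping from identifier to an iterable of reference fingerprints.
        compare: callable judging whether two fingerprints match."""
        identifier = decode_watermark(broadcast_audio)
        reference_fps = database[identifier]      # one or more fingerprints for the signal
        return any(compare(user_fp, ref_fp)
                   for user_fp in user_fps
                   for ref_fp in reference_fps)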


The foregoing are just exemplary implementations of the present invention. It will be recognized that there are a great number of variations on these basic themes. The foregoing illustrates but a few applications of the detailed technology. There are many others.


To provide a comprehensive disclosure without unduly lengthening this specification, applicants incorporate by reference the patents and patent applications cited above. It is applicant's express intention to teach that the methods detailed herein are applicable in connection with the technologies and applications detailed in these cited patents and applications.


Although the foregoing specification has focused on audio applications, it will be recognized that the same principles are likewise applicable with other forms of content, including still imagery, motion pictures, video, etc. References to “songs” are illustrative only, and are not intended to limit the present invention. The inventive methods and systems could also be applied to other audio, image, and video signals. Also, for example, Digimarc MediaBridge linking from objects to corresponding internet resources can be based on identifiers derived from captured image data or the like, rather than from embedded watermarks. As such, the technique is applicable to images and video.

Claims
  • 1. A method comprising: aggregating first fingerprint data and second fingerprint data, wherein fingerprint data comprises at least a reduced-bit representation of content, and wherein the first fingerprint data originated at a first source and the second fingerprint data originated at a second source, and wherein the first source and the second source are remotely located; identifying information associated with the first fingerprint data and the second fingerprint data; and determining a subset of the associated information.
  • 2. The method according to claim 1, wherein said determining is based at least in part on a frequency occurrence of the subset, and wherein the frequency occurrence comprises a vote tally.
  • 3. The method according to claim 1, wherein said determining is based at least in part on a frequency occurrence of the subset, and wherein the subset comprises at least one of audio, video, or image data.
  • 4. The method according to claim 3, wherein the associated information comprises at least one of audio, video or image data.
  • 5. The method of claim 1, wherein said aggregating comprises aggregating fingerprint data within a predetermined time period.
  • 6. The method according to claim 1, wherein the first fingerprint data comprises a first set of audio fingerprints, and wherein the second fingerprint data comprises a second set of audio fingerprints.
  • 7. A method to match a song based on an audio fingerprint, said method comprising: aggregating a first set of audio fingerprints provided by a first device with a second set of audio fingerprints provided by a remotely located second device; determining a plurality of songs relating to the aggregated fingerprints; and selecting a song from the plurality of songs based on a number of times a selected song matches the aggregated fingerprints.
  • 8. The method according to claim 7, wherein the selected song includes the highest number of matches.
  • 9. A method comprising: receiving a signal from a first broadcast source at a reference receiver; generating first fingerprint data from the received signal; applying the first fingerprint data to a database to select associated information; receiving second fingerprint data; and comparing the second fingerprint data with the associated information.
  • 10. The method according to claim 9, wherein said comparing comprises selecting a subset from the associated information based on a vote tally.
  • 11. A method comprising: receiving a signal from a first broadcast source at a reference receiver; generating first fingerprint data from the received signal; applying the first fingerprint data to a database to select associated information; receiving second fingerprint data; and comparing the second fingerprint data with the associated information, wherein said comparing comprises selecting a subset from the associated information based on a vote tally, and wherein the vote tally includes probabilities of a match with the second fingerprint data, and wherein the selected subset has a highest probability of a match.
  • 12. A method comprising: receiving a signal from a first broadcast source at a reference receiver; generating first fingerprint data from the received signal; applying the first fingerprint data to a database to select associated information; receiving second fingerprint data; and comparing the second fingerprint data with the associated information, wherein a user device generates the second fingerprint data.
  • 13. A method comprising: receiving a signal from a first broadcast source at a reference receiver; generating first fingerprint data from the received signal; applying the first fingerprint data to a database to select associated information; receiving second fingerprint data, wherein a cell phone generates the second fingerprint data; and comparing the second fingerprint data with the associated information.
  • 14. A method comprising: receiving a signal from a first broadcast source at a reference receiver; generating first fingerprint data from the received signal; applying the first fingerprint data to a database to select associated information; receiving second fingerprint data, wherein a user device generates the second fingerprint data; comparing the second fingerprint data with the associated information; and determining a geographical location of the user device.
  • 15. The method according to claim 14, wherein the user device comprises a cell phone, and wherein the geographical location of the user device is determined by at least one of area code, cell site, device identifier, repeater identifier, or alpha-numeric data.
  • 16. A method comprising: receiving a signal from a first broadcast source at a reference receiver; generating first fingerprint data from the received signal; applying the first fingerprint data to a database to select associated information; receiving second fingerprint data; comparing the second fingerprint data with the associated information; receiving a signal from a second broadcast source at the reference receiver; generating third fingerprint data from the received signal of the second broadcast source; and applying the third fingerprint data to the database to select associated information.
  • 17. The method according to claim 16, wherein the reference receiver comprises a plurality of receivers.
  • 18. The method according to claim 17, wherein at least a first receiver of the plurality of receivers and a second receiver of the plurality of receivers are located in different geographical locations.
  • 19. The method according to claim 9, wherein when a comparison of the second fingerprint data with the associated information does not identify a subset of the associated data, said method further comprises querying a second database to determine additional associated information.
  • 20. A method comprising: receiving a signal from a first broadcast source at a reference receiver, the signal comprising an embedded digital watermark; decoding the digital watermark to obtain a plural-bit identifier; interrogating a database with the identifier to identify a set of fingerprints associated with the received signal; receiving second fingerprint data; and comparing the second fingerprint data with the set of fingerprints.
  • 21. The method according to claim 20, wherein said comparing comprises selecting a subset from the set of fingerprints based on a vote tally.
  • 22. A method comprising: cumulating a first set of representations of audio or video with a second set of representations of audio or video, wherein the representations comprise reduced-bit representations of audio or video, and wherein the first set of representations are provided from a first device and the second set of representations are provided from a second device; determining a plurality of audio and video content relating to the cumulated sets; and selecting a set of audio or video content from the plurality of audio or video content based on a number of times a selected set of audio and video content corresponds with the cumulated sets.
  • 23. A method comprising: receiving content, wherein the content comprises an embedded digital watermark; decoding the digital watermark to obtain a plural-bit identifier; deriving a reduced-bit representation of the content; accessing a database with at least the plural-bit identifier; and using at least the reduced-bit representation of the content to help identify or authenticate the content.
RELATED APPLICATION DATA

This application is a continuation-in-part of application Ser. No. 09/858,189, filed May 14, 2001, which is a continuation-in-part of application Ser. No. 09/571,422, filed May 15, 2000 now U.S. Pat. No. 6,947,571. Application Ser. No. 09/571,422 claims priority benefit to each of the following provisional applications: Ser. No. 60/141,468, filed Jun. 29, 1999; Ser. No. 60/151,586, filed Aug. 30, 1999; Ser. No. 60/158,015, filed Oct. 6, 1999; Ser. No. 60/163,332, filed Nov. 3, 1999; and Ser. No. 60/164,619, filed Nov. 10, 1999. Application Ser. No. 09/571,422 is also a continuation-in-part of each of the following utility applications: Ser. No. 09/314,648, filed May 19, 1999 now U.S. Pat. No. 6,681,028; Ser. No. 09/342,688, filed Jun. 29, 1999 now U.S. Pat. No. 6,650,761; Ser. No. 09/342,689, filed Jun. 29, 1999 now U.S. Pat. No. 6,311,214; Ser. No. 09/342,971, filed Jun. 29, 1999 now abandoned; Ser. No. 09/343,101, filed Jun. 29, 1999 now abandoned; Ser. No. 09/343,104, filed Jun. 29, 1999 now abandoned; Ser. No. 09/531,076, filed Mar. 18, 2000; Ser. No. 09/543,125, filed Apr. 5, 2000; Ser. No. 09/547,664, filed Apr. 12, 2000; and Ser. No. 09/552,998, filed Apr. 19, 2000 now abandoned. This application is also a continuation-in-part of copending application Ser. Nos. 09/574,726 and 09/476,686, both of which claim priority to application Ser. No. 60/134,782. The present application claims priority benefit to the foregoing applications. The subject matter of this application is also related to that of Ser. Nos. 09/620,019, 60/257,822, 60/232,163, and 09/404,291.

US Referenced Citations (265)
Number Name Date Kind
3810156 Goldman May 1974 A
3919479 Moon et al. Nov 1975 A
4071698 Barger, Jr. et al. Jan 1978 A
4230990 Lert, Jr. et al. Oct 1980 A
4284846 Marley Aug 1981 A
4432096 Bunge Feb 1984 A
4450531 Kenyon et al. May 1984 A
4495526 Baranoff-Rossine Jan 1985 A
4499601 Matthews Feb 1985 A
4511917 Kohler et al. Apr 1985 A
4547804 Greenberg Oct 1985 A
4677466 Lert, Jr. et al. Jun 1987 A
4682370 Matthews Jul 1987 A
4697209 Kiewit et al. Sep 1987 A
4739398 Thomas et al. Apr 1988 A
4776017 Fujimoto Oct 1988 A
4807031 Broughton et al. Feb 1989 A
4843562 Kenyon et al. Jun 1989 A
4858000 Lu Aug 1989 A
4945412 Kramer Jul 1990 A
4972471 Gross Nov 1990 A
5019899 Boles et al. May 1991 A
5031228 Lu Jul 1991 A
5276629 Reynolds Jan 1994 A
5303393 Noreen et al. Apr 1994 A
5400261 Reynolds Mar 1995 A
5436653 Ellis et al. Jul 1995 A
5437050 Lamb et al. Jul 1995 A
5481294 Thomas et al. Jan 1996 A
5486686 Zdybel, Jr. et al. Jan 1996 A
5499294 Friedman Feb 1996 A
5504518 Ellis et al. Apr 1996 A
5539635 Larson, Jr. Jul 1996 A
5564073 Takahisa Oct 1996 A
5572246 Ellis et al. Nov 1996 A
5572653 DeTemple et al. Nov 1996 A
5574519 Manico et al. Nov 1996 A
5574962 Fardeau et al. Nov 1996 A
5577249 Califano Nov 1996 A
5577266 Takahisa et al. Nov 1996 A
5579124 Aijala et al. Nov 1996 A
5581658 O'Hagan et al. Dec 1996 A
5581800 Fardeau et al. Dec 1996 A
5584070 Harris et al. Dec 1996 A
5612729 Ellis et al. Mar 1997 A
5613004 Cooperman et al. Mar 1997 A
5621454 Ellis et al. Apr 1997 A
5638443 Stefik et al. Jun 1997 A
5640193 Wellner Jun 1997 A
5646997 Barton Jul 1997 A
5661787 Pocock Aug 1997 A
5663766 Sizer, II Sep 1997 A
5664018 Leighton Sep 1997 A
5671267 August et al. Sep 1997 A
5687236 Moskowitz et al. Nov 1997 A
5708478 Tognazzini Jan 1998 A
5737025 Dougherty et al. Apr 1998 A
5740244 Indeck Apr 1998 A
5751854 Saitoh et al. May 1998 A
5761606 Wolzien Jun 1998 A
5765152 Erickson Jun 1998 A
5765176 Bloomberg Jun 1998 A
5768426 Rhoads Jun 1998 A
5774452 Wolosewicz Jun 1998 A
5774666 Portuesi Jun 1998 A
5778192 Schuster et al. Jul 1998 A
5781629 Haber et al. Jul 1998 A
5781914 Stork et al. Jul 1998 A
5832119 Rhoads Nov 1998 A
5841978 Rhoads Nov 1998 A
5842162 Fineberg Nov 1998 A
5862260 Rhoads Jan 1999 A
5889868 Moskowitz et al. Mar 1999 A
5892900 Ginter et al. Apr 1999 A
5893095 Jain et al. Apr 1999 A
5901224 Hecht May 1999 A
5902353 Reber et al. May 1999 A
5903892 Hoffert et al. May 1999 A
5905248 Russell et al. May 1999 A
5905800 Moskowitz et al. May 1999 A
5918223 Blum et al. Jun 1999 A
5930369 Cox et al. Jul 1999 A
5932863 Rathus Aug 1999 A
5938727 Ikeda Aug 1999 A
5943422 Van Wie et al. Aug 1999 A
5978791 Farber et al. Nov 1999 A
5982956 Lahmi Nov 1999 A
5983176 Hoffert et al. Nov 1999 A
5986651 Reber et al. Nov 1999 A
5986692 Logan et al. Nov 1999 A
5991500 Kanota et al. Nov 1999 A
5991737 Chen Nov 1999 A
5995105 Reber et al. Nov 1999 A
6028960 Graf et al. Feb 2000 A
6037984 Isnardi et al. Mar 2000 A
6041411 Wyatt Mar 2000 A
6064764 Bhaskaran et al. May 2000 A
6081629 Browning Jun 2000 A
6081827 Reber et al. Jun 2000 A
6081830 Schindler Jun 2000 A
6084528 Beach et al. Jul 2000 A
6088455 Logan et al. Jul 2000 A
6121530 Sonoda Sep 2000 A
6122403 Rhoads Sep 2000 A
6131162 Yoshiura et al. Oct 2000 A
6138151 Reber et al. Oct 2000 A
6148407 Aucsmith Nov 2000 A
6157721 Shear et al. Dec 2000 A
6164534 Rathus et al. Dec 2000 A
6169541 Smith Jan 2001 B1
6181817 Zabih Jan 2001 B1
6185316 Buffam Feb 2001 B1
6185318 Jain et al. Feb 2001 B1
6188010 Iwamura Feb 2001 B1
6199048 Hudetz et al. Mar 2001 B1
6201879 Bender et al. Mar 2001 B1
6219787 Brewer Apr 2001 B1
6219793 Li et al. Apr 2001 B1
6226618 Downs et al. May 2001 B1
6226672 DeMartin et al. May 2001 B1
6243480 Zhao et al. Jun 2001 B1
6282362 Murphy et al. Aug 2001 B1
6286036 Rhoads Sep 2001 B1
6292092 Chow et al. Sep 2001 B1
6304523 Jones et al. Oct 2001 B1
6311214 Rhoads Oct 2001 B1
6314457 Schena et al. Nov 2001 B1
6314518 Linnartz Nov 2001 B1
6317881 Shah-Nazaroff et al. Nov 2001 B1
6321981 Ray Nov 2001 B1
6321992 Knowles et al. Nov 2001 B1
6324573 Rhoads Nov 2001 B1
6345104 Rhoads Feb 2002 B1
6386453 Russell et al. May 2002 B1
6389055 August et al. May 2002 B1
6408331 Rhoads Jun 2002 B1
6411725 Rhoads Jun 2002 B1
6415280 Farber et al. Jul 2002 B1
6433946 Ogino Aug 2002 B2
6434403 Ausems et al. Aug 2002 B1
6434561 Durst, Jr. et al. Aug 2002 B1
6439465 Bloomberg Aug 2002 B1
6466670 Tsuria et al. Oct 2002 B1
6496802 van Zoest et al. Dec 2002 B1
6504940 Omata et al. Jan 2003 B2
6505160 Levy Jan 2003 B1
6522769 Rhoads et al. Feb 2003 B1
6523175 Chan Feb 2003 B1
6526449 Philyaw et al. Feb 2003 B1
6542927 Rhoads Apr 2003 B2
6542933 Durst, Jr. et al. Apr 2003 B1
6553129 Rhoads Apr 2003 B1
6577746 Evans et al. Jun 2003 B1
6604072 Pitman et al. Aug 2003 B2
6611599 Natarajan Aug 2003 B2
6614914 Rhoads et al. Sep 2003 B1
6625295 Wolfgang et al. Sep 2003 B1
6658568 Ginter et al. Dec 2003 B1
6671407 Venkatesan et al. Dec 2003 B1
6674876 Hannigan et al. Jan 2004 B1
6674993 Tarbouriech Jan 2004 B1
6681028 Rodriguez et al. Jan 2004 B2
6697948 Rabin et al. Feb 2004 B1
6735311 Rump et al. May 2004 B1
6748360 Pitman Jun 2004 B2
6748533 Wu Jun 2004 B1
6751336 Zhao Jun 2004 B2
6754822 Zhao Jun 2004 B1
6768980 Meyer et al. Jul 2004 B1
6771885 Agnihotri Aug 2004 B1
6772124 Hoffberg et al. Aug 2004 B2
6785815 Serret-Avila et al. Aug 2004 B1
6807534 Erickson Oct 2004 B1
6829368 Meyer et al. Dec 2004 B2
6834308 Ikezoye et al. Dec 2004 B1
6850252 Hoffberg Feb 2005 B1
6856977 Adelsbach Feb 2005 B1
6931451 Logan et al. Aug 2005 B1
6941275 Swierczek Sep 2005 B1
6947571 Rhoads et al. Sep 2005 B1
6968337 Wold Nov 2005 B2
6973574 Mihcak et al. Dec 2005 B2
6973669 Daniels Dec 2005 B2
6987862 Rhoads Jan 2006 B2
6990453 Wang Jan 2006 B2
7047413 Yacobi et al. May 2006 B2
7050603 Rhoads et al. May 2006 B2
7055034 Levy May 2006 B1
7058697 Rhoads Jun 2006 B2
7127744 Levy Oct 2006 B2
20010007130 Takaragi Jul 2001 A1
20010011233 Narayanaswami Aug 2001 A1
20010026618 Van Wie et al. Oct 2001 A1
20010026629 Oki Oct 2001 A1
20010031066 Meyer et al. Oct 2001 A1
20010032312 Runje et al. Oct 2001 A1
20010044824 Hunter et al. Nov 2001 A1
20010046307 Wong Nov 2001 A1
20010055391 Jacobs Dec 2001 A1
20020010826 Takahashi et al. Jan 2002 A1
20020021805 Schumann et al. Feb 2002 A1
20020021822 Maeno Feb 2002 A1
20020023020 Kenyon et al. Feb 2002 A1
20020023148 Ritz et al. Feb 2002 A1
20020023218 Lawandy et al. Feb 2002 A1
20020028000 Conwell et al. Mar 2002 A1
20020032698 Cox Mar 2002 A1
20020032864 Rhoads Mar 2002 A1
20020037083 Weare et al. Mar 2002 A1
20020040433 Kondo Apr 2002 A1
20020044659 Ohta Apr 2002 A1
20020048224 Dygert Apr 2002 A1
20020052885 Levy May 2002 A1
20020059208 Abe May 2002 A1
20020059580 Kalker et al. May 2002 A1
20020068987 Hars Jun 2002 A1
20020069107 Werner Jun 2002 A1
20020072982 Van de Sluis Jun 2002 A1
20020072989 Van de Sluis Jun 2002 A1
20020075298 Schena et al. Jun 2002 A1
20020082731 Pitman et al. Jun 2002 A1
20020083123 Freedman et al. Jun 2002 A1
20020087885 Peled et al. Jul 2002 A1
20020088336 Stahl Jul 2002 A1
20020099555 Pitman et al. Jul 2002 A1
20020102966 Lev et al. Aug 2002 A1
20020118864 Kondo et al. Aug 2002 A1
20020126872 Brunk Sep 2002 A1
20020133499 Ward et al. Sep 2002 A1
20020138744 Schleicher et al. Sep 2002 A1
20020150165 Huizer Oct 2002 A1
20020152388 Linnartz et al. Oct 2002 A1
20020153661 Brooks et al. Oct 2002 A1
20020161741 Wang et al. Oct 2002 A1
20020168082 Razdan Nov 2002 A1
20020172394 Venkatesan et al. Nov 2002 A1
20020174431 Bowman et al. Nov 2002 A1
20020178410 Haitsma et al. Nov 2002 A1
20020184505 Mihcak et al. Dec 2002 A1
20020188840 Echizen Dec 2002 A1
20020196976 Venkatesan Dec 2002 A1
20020199106 Hayashi Dec 2002 A1
20030018709 Schrempp et al. Jan 2003 A1
20030028796 Roberts et al. Feb 2003 A1
20030037010 Schmelzer Feb 2003 A1
20030051252 Miyaoku Mar 2003 A1
20030101162 Thompson et al. May 2003 A1
20030120679 Kriechbaum et al. Jun 2003 A1
20030135623 Schrempp et al. Jul 2003 A1
20030167173 Levy et al. Sep 2003 A1
20030174861 Levy et al. Sep 2003 A1
20030197054 Eunson Oct 2003 A1
20040049540 Wood Mar 2004 A1
20040145661 Murakami et al. Jul 2004 A1
20040169892 Yoda Sep 2004 A1
20040201676 Needham Oct 2004 A1
20040223626 Honsinger et al. Nov 2004 A1
20050043018 Kawamoto Feb 2005 A1
20050044189 Ikezoye et al. Feb 2005 A1
20050058319 Rhoads et al. Mar 2005 A1
20050091268 Meyer et al. Apr 2005 A1
20050108242 Kalker et al. May 2005 A1
20050144455 Haitsma Jun 2005 A1
20050229107 Hull et al. Oct 2005 A1
20050267817 Barton et al. Dec 2005 A1
Foreign Referenced Citations (28)
Number Date Country
161512 Nov 1985 EP
493091 Jul 1992 EP
953938 Nov 1999 EP
0967803 Dec 1999 EP
1173001 Jan 2002 EP
1199878 Apr 2002 EP
11265396 Sep 1999 JP
WO9803923 Jan 1998 WO
WO9935809 Jul 1999 WO
WO9959275 Nov 1999 WO
WO0058940 Oct 2000 WO
WO0079709 Dec 2000 WO
WO0106703 Jan 2001 WO
WO0115021 Mar 2001 WO
WO0120483 Mar 2001 WO
WO0120609 Mar 2001 WO
WO0115021 Mar 2001 WO
WO0162004 Aug 2001 WO
WO0171517 Sep 2001 WO
WO0172030 Sep 2001 WO
WO0175794 Oct 2001 WO
WO0175629 Oct 2001 WO
WO0211123 Feb 2002 WO
WO0211123 Feb 2002 WO
WO0219589 Mar 2002 WO
WO0227600 Apr 2002 WO
WO0227600 Apr 2002 WO
WO02082271 Oct 2002 WO
Related Publications (1)
Number Date Country
20020028000 A1 Mar 2002 US
Provisional Applications (5)
Number Date Country
60164619 Nov 1999 US
60163332 Nov 1999 US
60158015 Oct 1999 US
60151586 Aug 1999 US
60141468 Jun 1999 US
Continuation in Parts (12)
Number Date Country
Parent 09858189 May 2001 US
Child 09888339 US
Parent 09571422 May 2000 US
Child 09858189 US
Parent 09552998 Apr 2000 US
Child 09571422 US
Parent 09547664 Apr 2000 US
Child 09552998 US
Parent 09543125 Apr 2000 US
Child 09547664 US
Parent 09531076 Mar 2000 US
Child 09543125 US
Parent 09342689 Jun 1999 US
Child 09531076 US
Parent 09343104 Jun 1999 US
Child 09342689 US
Parent 09343101 Jun 1999 US
Child 09343104 US
Parent 09342971 Jun 1999 US
Child 09343101 US
Parent 09342688 Jun 1999 US
Child 09342971 US
Parent 09314648 May 1999 US
Child 09342688 US