1. Field of the Disclosure
The present disclosure relates to data communications, and, in particular, to a novel system and apparatus for the automatic identification of new media.
2. The Prior Art
Background
Once an audio or video work has been recorded, it may be downloaded by users for play, or broadcast (“streamed”) over the Internet, conventional radio or television broadcast, or satellite broadcast media. When works are streamed, they may be listened to or viewed by Internet users in a manner much like traditional radio and television stations. Media streams often contain both performances of pre-recorded works and extemporaneous material, such as announcements or other narrative material. Furthermore, media streams may contain no information about the work being performed, or the information provided may be imprecise.
Given the widespread use of streamed media, audio and video works may need to be identified. The need for identification of works may arise in a variety of situations. For example, an artist may wish to verify royalty payments or generate their own Arbitron®-like ratings by identifying how often their works are being performed. Thus, playlists of media may need to be generated. Additionally, for competitive analysis a business may wish to know when and where a competitor is placing advertising in the media. Furthermore, a broadcast source may want to know when and how often a competing broadcast source is using pre-recorded material.
Further complicating identification are improvements in technology that allow a tremendous number of new works to be produced, such as new song recordings, new advertisements, newsworthy audio clips, and station promotions. A comprehensive playlist preferably would include these new works, which may be performed over a wide variety of media streams.
If a match is made, typically the module 102 will keep a record of all matches made during a predetermined period of time. For example, the module 102 may keep a record of song titles detected during a 24-hour period.
The system 100 may further include a playlist server 110 having a processor 112 and database 114. The server 110 is typically configured to receive information such as the titles of identified songs from the one or more detection modules 102 through a network such as the Internet 109 and generate a playlist which may be stored on database 114.
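By way of illustration only, the following Python sketch shows one way such a match record and playlist aggregation might be kept in software. The class and field names are hypothetical and are not taken from the prior-art system 100; the sketch simply assumes that each detection yields a title, a source stream, and a timestamp.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Match:
    title: str            # identified song title
    station: str          # media stream on which the match was made
    detected_at: datetime


class MatchLog:
    """Hypothetical 24-hour match record kept by a detection module."""

    def __init__(self, window: timedelta = timedelta(hours=24)):
        self.window = window
        self.matches: list[Match] = []

    def record(self, match: Match) -> None:
        self.matches.append(match)
        cutoff = datetime.now() - self.window
        # Discard matches that have aged out of the reporting window.
        self.matches = [m for m in self.matches if m.detected_at >= cutoff]

    def playlist(self) -> Counter:
        """Title -> play count, as a playlist server might aggregate it."""
        return Counter(m.title for m in self.matches)
```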
However, the system 100 is typically unable to identify works for which a corresponding reference representation does not exist in the reference database.
A new media identification system is disclosed. In one aspect, a system may comprise at least one analysis module for receiving and analyzing streamed work and generating a corresponding representation thereof; at least one identification (ID) server for receiving the representation from the at least one analysis module and generating a collection of unidentifiable segments in the received work.
A method for identifying new works is also disclosed. In one aspect, a method may comprise receiving an unidentified segment; determining whether the unidentified segment is similar to previously received unidentified segments; and sequentially extending similar unidentified segments into a single super segment.
Persons of ordinary skill in the art will realize that the following description is illustrative only and not in any way limiting. Other modifications and improvements will readily suggest themselves to such skilled persons having the benefit of this disclosure.
This disclosure may relate to data communications. Various disclosed aspects may be embodied in various computer and machine readable data structures. Furthermore, it is contemplated that data structures embodying the teachings of the disclosure may be transmitted across computer and machine readable media, and through communications systems by use of standard protocols such as those used to enable the Internet and other computer networking standards.
The disclosure may relate to machine readable media on which are stored various aspects of the disclosure. It is contemplated that any media suitable for retrieving instructions is within the scope of the present disclosure. By way of example, such media may take the form of magnetic, optical, or semiconductor media.
Various aspects of the disclosure may be described through the use of flowcharts. Often, a single instance of an aspect of the present disclosure may be shown. As is appreciated by those of ordinary skill in the art, however, the protocols, processes, and procedures described herein may be repeated continuously or as often as necessary to satisfy the needs described herein. Accordingly, the representation of various aspects of the present disclosure through the use of flowcharts should not be used to limit the scope of the present disclosure.
Exemplary Structure
The analysis module 202 may also be configured to receive a media stream from one or more networked sources 206. In one aspect of a disclosed system, the input port 210 of the analysis module 202 may be configured to monitor sources providing content in standard formats such as Real®, QuickTime®, Windows Media®, MP3®, and similar formats, using hardware and software as is known in the art.
In another aspect of a disclosed system, the input port 210 may be configured to directly receive audio or video through any of the various means known in the art, such as a microphone, video acquisition system, VHS tape, or audio cassette tape. These media streams may also be provided in standard formats such as MP3, Windows Media, and similar formats. Thus, the analysis module 202 may be configured to receive a work prior to the work being presented to the broadcast system or network source. It is envisioned that this presentation could occur almost simultaneously.
The input port 210 may be operatively coupled to a network 208 through which the source 206 may be accessed. The network 208 may comprise any packet- or frame-based network known in the art, such as the Internet. The input port 210 may also be configured to access the network 208 through any means known in the art, such as through traditional copper connections. Furthermore, the input port 210 may also be configured to access the network 208 using wireless connectivity methods as known in the art, including low-power broadband methods such as Bluetooth®, or cellular-based access methods such as those used to provide wireless connectivity to cellular phones and personal digital assistants (PDAs).
The analysis module 202 may also include an output port 212 for providing connectivity to the network 208. The output port 212 may comprise a separate unit within the analysis module 202 and may include hardware and software to provide the same functionality as the input port 210. Additionally, it is contemplated that the output port 212 may comprise substantially the same circuitry as the input port 210 in order to save space and cost.
Referring now to
It is contemplated that any processor known in the art may be employed in the module 202, and the choice of a processor may depend on the application. For example, if the module 202 is embodied in a personal computer, the processor 302 may comprise a microprocessor capable of running conventional operating systems such as Microsoft Windows®, while if the module 202 is deployed in a mobile unit such as a PDA, the processor 302 may need only be capable of running an operating system such as Palm OS®, or other embedded systems such as may be present in a cell phone or other consumer device.
The module 202 may include ancillary hardware and software, such as conventional memory 304 and a conventional database 306 for the storage and retrieval of various aspects of the disclosed system and data.
The module 202 may be configured to generate a representation of a received work, which may then be used by the system to identify performed works contained in the received work. It is contemplated that a wide variety of methods may be used by the analysis module 202 to generate the representation. The analysis module may be configured to generate a representation of the received work using the psychoacoustic properties of the audio content of the received work. Such methods are known in the art. For example, the analysis module may generate feature vectors as disclosed in U.S. Pat. No. 5,918,223 to Blum, et al., which is assigned to the same assignee as the present disclosure and is incorporated by reference as though fully set forth herein.
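The specific psychoacoustic features of U.S. Pat. No. 5,918,223 are not reproduced here; the following sketch merely illustrates, under the assumption of simple frame-based spectral descriptors, what generating a sequence of feature vectors from sampled audio might look like.

```python
import numpy as np


def feature_vectors(samples: np.ndarray, rate: int,
                    frame_sec: float = 0.5) -> np.ndarray:
    """Return one feature vector per frame of a sampled audio signal.

    Illustrative stand-in only: a real analysis module would use richer
    psychoacoustic features (loudness, pitch, brightness, bandwidth, etc.).
    """
    frame_len = int(rate * frame_sec)
    n_frames = len(samples) // frame_len
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / rate)
    vectors = []
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame))
        energy = float(np.sum(spectrum ** 2))
        # Spectral centroid as a crude "brightness" descriptor.
        centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
        vectors.append([energy, centroid])
    return np.array(vectors)
```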
Additionally, the module 202 may use audio or video spectral or wavelet representation techniques as are known in the art. For example, other representation forms may comprise the text output of a speech recognition system, the text output of a closed-captioned transmission, or a musical score produced by a music transcription system. In another embodiment, the representation may comprise a bit-calculated key using any of the techniques known in the art, such as an MD5 hash or CRC.
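As a minimal sketch of the bit-calculated-key alternative, the functions below compute MD5 and CRC-32 keys over the raw bytes of a work. Such keys identify only bit-identical copies, which is why they are presented here as one representation option among several.

```python
import hashlib
import zlib


def md5_key(content: bytes) -> str:
    """128-bit key, hex-encoded, computed with MD5 over the raw content."""
    return hashlib.md5(content).hexdigest()


def crc32_key(content: bytes) -> int:
    """32-bit key computed with CRC-32 over the raw content."""
    return zlib.crc32(content)
```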
The representation may also make note of significant changes in the content of a media signal. Changes in the media stream may also be indicated by a transition from one characteristic set of features to another. By way of example only, such changes may be indicated by a relatively quiet audio section, a change from heavy bass to heavy treble, a blank video frame, or a change in the relative amounts of color in successive segments.
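A minimal sketch of such change detection, assuming the per-frame feature vectors described above and a hypothetical distance threshold, might mark a boundary wherever consecutive feature vectors differ sharply:

```python
import numpy as np


def change_points(features: np.ndarray, threshold: float) -> list[int]:
    """Frame indices where the feature stream jumps from one
    characteristic set of values to another.

    `features` is a (frames x dimensions) matrix; `threshold` is an
    application-dependent, hypothetical value.
    """
    return [i for i in range(1, len(features))
            if np.linalg.norm(features[i] - features[i - 1]) > threshold]
```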
It is contemplated that a wide variety of analysis methods may be employed singly or in combination advantageously in the present disclosure.
Referring back to
The ID server 220 may comprise a computer suitable for running an operating system such as Microsoft Windows®, UNIX®, LINUX®, MAC OS®, and the like. The ID server 220 may include a conventional processor 222 for operation of the server. The ID server may further include associated hardware and software known in the art such as a conventional database 224 for storing embodiments of the disclosure or data.
It is contemplated that the ID server 220 may be configured to identify a received work using a variety of methods known in the art. The method for identification may correspond to the method(s) used to generate the representation within the analysis module. For example, the ID server 220 may be configured to perform identification using the methods disclosed in U.S. Pat. No. 5,918,223 to Blum, et al., if the representation were generated using corresponding methods.
Another example would be pure spectral representations as are known in the art. It is envisioned that other representations such as wavelets may be used. The system could also identify the received work by comparing speech-recognized text against a database of song lyrics using any of a variety of methods known to those skilled in the art.
Yet another example would be any of a number of search techniques as are known in the art when the representation is a bit-calculated key.
The system may also identify the received work by searching a collection of musical works for musical note sequences that correspond to the musical score in the representation.
In another configuration the system may use a combination of identification techniques, each of which correspond to a representation of the received work. By using several identification techniques, the chance of a misidentification or missed identification may be greatly reduced.
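The disclosure does not prescribe how such a combination is performed; one simple assumption, sketched below, is to run each available matcher and accept an identification only when a minimum number of techniques agree. The voting rule and the `min_votes` value are illustrative assumptions.

```python
from collections import Counter
from typing import Callable, Optional

# A matcher takes a representation and returns a work ID, or None if no match.
Identifier = Callable[[object], Optional[str]]


def identify(representation: object, identifiers: list[Identifier],
             min_votes: int = 2) -> Optional[str]:
    """Accept an identification only when enough techniques agree.

    Each identifier stands in for one matching method (feature-vector
    search, hash lookup, lyric comparison, score matching, ...).
    """
    votes = Counter(
        result for matcher in identifiers
        if (result := matcher(representation)) is not None
    )
    if not votes:
        return None
    work, count = votes.most_common(1)[0]
    return work if count >= min_votes else None
```

Requiring agreement between independent matchers is one way the chance of a misidentification may be reduced, at the cost of occasionally missing an identification that only a single technique would have made.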
Though the analysis module and ID server are shown as being located separately, it is contemplated that they also may be co-located in a single server. For example, it is contemplated that the analysis module and ID server may each be embodied in a single board computer wherein the analysis module and ID server are housed in a single unit and operatively coupled through a common backplane.
Exemplary Operation
Additionally, one or more of the analysis modules may be configured to receive a plurality of stream sources simultaneously for analysis. It is contemplated that the analysis modules may be located and configured to receive and analyze a wide variety of content, including analog radio or video, digital streaming audio or video, VHS tape, audio cassette tape or any other media.
In act 402, the analysis module then creates a representation of the received work as shown and described above. The representation may be created by the analysis module by extracting psychoacoustic properties from the received work as described above.
In act 404, the representations created by the one or more analysis modules may be provided to an ID server. The ID server may comprise hardware and software as described above. It is contemplated that the ID server may comprise a single server, multiple servers networked at a single location, or multiple servers located at different locations.
It is contemplated that the various analysis modules may provide representations to one or more ID servers in a wide variety of manners. For example, all of the analysis modules present in a system may provide representations in real-time. Or, different analysis modules may be configured to provide representations at different intervals depending on the needs of the end user. The analysis modules may transmit representations every sixty seconds, hourly, or as often as is needed.
In some cases where network connectivity is challenging, the representations may be batched and sent to the ID server(s) once a day or even less frequently. In particularly harsh or secretive conditions, the representations may be stored within the analysis modules until the modules can be physically retrieved and operatively coupled to an ID server at another physical location.
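A minimal sketch of such store-and-forward behavior follows, assuming representations are serialized as JSON lines in a local spool file and flushed on a configurable interval; the file layout and the once-a-day default are illustrative assumptions only.

```python
import json
import time
from pathlib import Path


class RepresentationSpool:
    """Hypothetical store-and-forward queue inside an analysis module."""

    def __init__(self, spool_file: Path, interval_sec: float = 86400.0):
        self.spool_file = spool_file
        self.interval_sec = interval_sec        # default: once a day
        self.last_flush = time.monotonic()

    def add(self, representation: dict) -> None:
        # Append the representation locally; nothing is transmitted yet.
        with self.spool_file.open("a", encoding="utf-8") as f:
            f.write(json.dumps(representation) + "\n")

    def due_for_transmission(self) -> bool:
        # True once the reporting interval has elapsed; the caller would
        # then upload the spool file to the ID server(s) and truncate it.
        return time.monotonic() - self.last_flush >= self.interval_sec
```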
It is contemplated that an out-of-band event may be used to trigger the transmission of representations. For example, such a trigger may comprise the initialization of a connection to a network, or the activation of media playing software or hardware.
In act 502, the ID server identifies portions of the received work based upon the representation. This identification may be performed using the methods as described above. The identification may include such information as the song title, artist, label, or any other information as is known in the art that may be associated with the work. The identification information might contain information such as the name of the advertiser or a descriptive notation of an FCC broadcaster identification segment. The identification information might contain a narrative description of a news segment.
Once an identification of a received work is made, it is contemplated that a wide variety of further acts may be performed. For example, the identifications made by the ID server may be used to construct or maintain a playlist database. Such a playlist may be stored on the ID server, or on a distant server. As will be appreciated by those skilled in the art, if representations are provided to the ID server in real-time (or near real-time, depending on the equipment or network used), a playlist may be generated in corresponding real-time. Thus, a comprehensive playlist of every identified media segment may be generated in real-time from inputs provided from distant geographic locations or multiple sources.
Additionally, the identification may be transmitted back to the analysis module which generated the representation. This may be advantageous where it is desired to generate a playlist for the particular analysis module's location or user. Thus, the ID server may be configured to provide an identification back to the source analysis module.
The identity of the received work may also be used for the maintenance of the system. Typically, copies of received works are stored on local drives for audit purposes. Since the intermediate representation files may be larger in size than the identities, it may be desirable to configure the analysis module to purge intermediate representations for identified works to recover drive space. It is contemplated that the ID server may be configured to transmit the identity of received works back to the generating analysis module, and the corresponding part of the representation may then be deleted from local drives by the analysis module, thereby recovering valuable capacity.
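A sketch of that housekeeping step is shown below, assuming a hypothetical layout in which each intermediate representation is stored as a `<segment_id>.rep` file and the ID server reports back the set of segment IDs it has identified.

```python
from pathlib import Path


def purge_identified(representation_dir: Path, identified_ids: set[str]) -> int:
    """Delete stored representation files whose work has been identified.

    Returns the number of files removed so the caller can log the
    recovered capacity.
    """
    removed = 0
    for rep_file in representation_dir.glob("*.rep"):
        if rep_file.stem in identified_ids:
            rep_file.unlink()   # recover local drive space
            removed += 1
    return removed
```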
Furthermore, it is contemplated that the ID server or analysis module may be configured to send information regarding identified works to third parties, such as third-party servers. Additionally, the ID server or analysis module may be configured to provide an electronic notification to third parties of identifications made by the ID server. Examples of electronic notifications may include email, HTTP POST transactions, or other electronic communication as is known in the art. As is known by those skilled in the art, these electronic notifications may be used to initiate an action based on their content. For example, such notifications may allow the playlist to be accessed in real-time or as desired.
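By way of example only, an HTTP POST notification of the kind mentioned above might be sent as follows; the endpoint URL and payload fields are placeholders, since the disclosure requires only that some electronic notification be delivered.

```python
import json
import urllib.request


def notify_third_party(url: str, identification: dict) -> int:
    """POST an identification record to a subscriber's endpoint.

    Returns the HTTP status code; a non-2xx response raises an HTTPError,
    which the caller may treat as a delivery failure.
    """
    body = json.dumps(identification).encode("utf-8")
    request = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```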
It is contemplated that the ID server may be configured to provide customized playlists containing information tailored to a customer's individual needs. For example, a customer may wish to be notified whenever a certain work is broadcast, or whenever a particular work is broadcast on a particular media outlet. Customers may wish to have complete playlists provided to them periodically at desired intervals that may include statistics known in the art. By using the system as disclosed herein, such requests may be satisfied automatically in real-time, or at whatever interval may be desired. It is to be understood that any of the aspects of the present disclosure may be performed in real time or as often as desired.
Unidentified Segments
During the process described above, the received work presented to the system may contain segments which may not be identified. In an aspect of a disclosed system, such unidentified segments may be examined to provide useful information. For example, if a particular unidentified segment is repeated often it may contain a new song or commercial or other pre-recorded work that warrants further action.
In one aspect of a disclosed system, the ID server may examine the representations of unidentified segments, and determine that some sub-segments were actually repeat performances of a single work. Furthermore, this examination may extract a plurality of other characteristics of the original broadcast such as the amount of musical content, amount of speech content, a transcription based on speech recognition, the beat of any music present, etc. These characteristics of the unidentified segments may then be used to classify the unidentified received representations.
For example, a sub-segment that has been performed more than once may be correlated with a high amount of musical content and a certain minimum length of play time to indicate that a new song has been detected. Correlating other values and characteristics could indicate that a new advertisement has been detected. In some cases a corresponding segment of the original broadcast signal could be retrieved and played for a human to perform an identification.
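The correlation described above can be expressed as a simple rule set. The sketch below is illustrative only; the thresholds are assumptions and not values taken from the disclosure.

```python
def classify_segment(repeat_count: int, music_ratio: float,
                     speech_ratio: float, length_sec: float) -> str:
    """Heuristic classification of a repeated unidentified segment.

    Inputs are the extracted characteristics described above: how many
    times the segment has been performed, the fraction of musical and
    speech content, and its length in seconds.
    """
    if repeat_count < 2:
        return "unclassified"
    if music_ratio > 0.8 and length_sec > 120:
        return "possible new song"
    if length_sec < 60:
        return "possible advertisement or promotion"
    if speech_ratio > 0.8:
        return "possible news or talk segment"
    return "repeated segment of unknown type"
```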
The process of
In query 602, the system determines whether the received work can be identified. If the work can be identified, the work may be identified in act 604. The determination and identification acts may be performed as disclosed above.
If the received work cannot be identified, then the unidentified segment may be reported to the system in act 606. It is contemplated that the unidentified segment may be indexed and cataloged. Additionally, a list of unidentified segments may be generated.
In query 702, it is determined whether the received unidentified segment is similar to any part of any previously received unidentified segment. In one embodiment, the analysis performed in query 702 may comprise decomposing each unidentified segment into a series of overlapping 5-second sub segments and comparing each unidentified sub segment against other unidentified sub segments. It is contemplated that a wide variety of similarity measurement techniques may be used, such as those used to identify segments as disclosed above. For example, a threshold for similarity may comprise the vector distance between unidentified segments computed as disclosed above. The choice of similarity measurement may dictate the length of the matching sub segments discovered.
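A minimal sketch of this decomposition-and-comparison step, assuming each segment has already been reduced to a per-frame feature matrix and using a hypothetical mean vector-distance threshold, might look like this:

```python
import numpy as np


def split_into_sub_segments(features: np.ndarray, frames_per_sec: float,
                            sub_len_sec: float = 5.0,
                            hop_sec: float = 2.5) -> list[np.ndarray]:
    """Decompose a segment's feature matrix into overlapping sub-segments.

    5-second windows with a 2.5-second hop are an illustrative choice for
    the "overlapping 5-second sub segments" described above.
    """
    sub_len = max(int(sub_len_sec * frames_per_sec), 1)
    hop = max(int(hop_sec * frames_per_sec), 1)
    return [features[i:i + sub_len]
            for i in range(0, max(len(features) - sub_len + 1, 1), hop)]


def is_same_work(a: np.ndarray, b: np.ndarray, threshold: float) -> bool:
    """Vector-distance similarity test between two sub-segments."""
    n = min(len(a), len(b))
    if n == 0:
        return False
    distance = float(np.mean(np.linalg.norm(a[:n] - b[:n], axis=1)))
    return distance < threshold
```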
If the unidentified segment is not determined to be similar to a previously received unidentified segment, then the segment may be indexed and cataloged in act 704. Such a segment may then serve as a reference against which future unidentified segments may be compared.
If an unidentified segment is determined to be similar to a previously received unidentified segment, the system may conclude that similar unidentified segments may be performances of the same work, e.g., from the same master recording. When the similarity comparison process indicates that the unidentified sub segment is from the same work as another unidentified sub segment, then the system may attempt to extend the length of the similar unidentified segments by ‘stitching’ together contiguous unidentified sub segments which also meet the criteria of being performances of the same work. These extended segments, consisting of similar earlier and later unidentified segments, are referred to herein as “super segments”.
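One way to implement the ‘stitching’ step is sketched below, assuming the similarity test above has already been applied to corresponding sub-segments and has produced a per-position match flag; representing the result as (start, end) index runs is an assumption made for illustration.

```python
def stitch_super_segments(match_flags: list[bool]) -> list[tuple[int, int]]:
    """Extend matching sub-segments into contiguous 'super segment' runs.

    `match_flags[i]` is True when sub-segment i of one unidentified
    segment was judged to be from the same work as the corresponding
    sub-segment of another.  Returns (start, end) pairs, end exclusive.
    """
    runs: list[tuple[int, int]] = []
    start = None
    for i, matched in enumerate(match_flags):
        if matched and start is None:
            start = i
        elif not matched and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(match_flags)))
    return runs
```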
Groups of super segments may be created which consist of contiguous runs of unidentified segments collected from one or more media streams that may all be performances of the same work. It is contemplated that super segments may comprise any length, and may preferably have a length corresponding to standard media lengths such as 15 seconds, 30 seconds, 60 seconds, 13 minutes, or even an hour. Of course, other lengths may be used.
In a further exemplary embodiment, once a super segment has been created, it will be included in the process of
These repeating segments may contain valuable information and may be reported on. In one embodiment, super segments may be reported on by length. For example, any repeating segments less than 63 seconds in length may represent advertisements, news segments or station promotions. In another embodiment, any repeating segments between 2 and 15 minutes may indicate a song. Additionally, longer repeating segments may indicate an entire broadcast is being repeated, such as a radio talk show or TV show.
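A sketch of such length-based reporting follows, using the example boundaries from the text (under 63 seconds, 2 to 15 minutes, longer); treating these as configurable cut-offs is an assumption.

```python
def length_class(length_sec: float) -> str:
    """Coarse report category for a repeating super segment, by length."""
    if length_sec < 63:
        return "advertisement / news segment / station promotion"
    if 120 <= length_sec <= 15 * 60:
        return "possible new song"
    if length_sec > 15 * 60:
        return "possible repeated program (talk show, TV show)"
    return "other repeating segment"
```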
It is contemplated that the ID server as disclosed herein may perform the process of
Often a substantial time interval will pass between performances of a work over a given media stream. However, the same work is often performed on several different media streams. The time between performances of the same work on different media streams may be far less than the time between performances of the work on any one media stream. Furthermore, advertisements may often play concurrently over several different media streams as the advertiser tries to achieve great consumer impact. Thus, the system described herein will preferably recognize a new work as soon as it is performed a second time on any monitored media stream.
In a further aspect, the unidentified segments and super segments may be transmitted back to the analysis module which generated the representation. This may be advantageous where it is desired to generate a new work playlist for the particular analysis module's location or user. Thus, the ID server may be configured to provide unidentified segments or super segments back to the source analysis module. In this case, the source analysis module may decide to hold the original source audio corresponding to the new work super segment for future identification through more traditional, human based, methods.
Furthermore, it is contemplated that the ID server or analysis module may be configured to send information regarding detected new works to third parties, such as third-party servers. Additionally, the ID server or analysis module may be configured to provide an electronic notification to third parties of new work detection made by the ID server. Examples of electronic notifications may include email, HTTP POST transactions, or other electronic communication as is known in the art. As is known by those skilled in the art, these electronic notifications may be used to initiate an action based on their content. For example, such notifications may allow the new works playlist to be accessed in real-time or as desired. The identification of a new work may be used to raise an alert that a new advertisement, song, or news clip has just been released to media casters.
It is contemplated that the ID server may be configured to provide customized new work playlists containing information tailored to a customer's individual needs. For example, a customer may wish to be notified whenever a new work with certain characteristics, as described above, is detected, or whenever a particular type of new work is detected on a particular media outlet. For example, new works reports may be generated which classify super segments based on length. Customers may wish to have complete new work playlists provided to them periodically at desired intervals that may include statistics known in the art. By using the system as disclosed herein, such requests may be satisfied automatically in real-time, or at whatever interval may be desired. It is to be understood that any of the aspects of the present disclosure may be performed in real time or as often as desired.
While embodiments and applications have been shown and described, it would be apparent to those skilled in the art that many more modifications and improvements than mentioned above are possible without departing from the inventive concepts herein. The disclosure, therefore, is not to be restricted except in the spirit of the appended claims.
This application is a continuation-in-part of U.S. application Ser. No. 09/910,680, filed Jul. 20, 2001.
Number | Name | Date | Kind |
---|---|---|---|
3919479 | Moon et al. | Nov 1975 | A |
4230990 | Lert et al. | Oct 1980 | A |
4449249 | Price | May 1984 | A |
4450531 | Kenyon et al. | May 1984 | A |
4454594 | Hefron et al. | Jun 1984 | A |
4677455 | Okajima | Jun 1987 | A |
4677466 | Lert et al. | Jun 1987 | A |
4739398 | Thomas et al. | Apr 1988 | A |
4843562 | Kenyon et al. | Jun 1989 | A |
4918730 | Schulze | Apr 1990 | A |
5210820 | Kenyon | May 1993 | A |
5247688 | Ishigami | Sep 1993 | A |
5283819 | Glick et al. | Feb 1994 | A |
5327521 | Savic et al. | Jul 1994 | A |
5437050 | Lamb et al. | Jul 1995 | A |
5442645 | Ugon | Aug 1995 | A |
5504518 | Ellis et al. | Apr 1996 | A |
5581658 | O'Hagan et al. | Dec 1996 | A |
5588119 | Vincent | Dec 1996 | A |
5612729 | Ellis et al. | Mar 1997 | A |
5612974 | Astrachan | Mar 1997 | A |
5613004 | Cooperman et al. | Mar 1997 | A |
5638443 | Stefik et al. | Jun 1997 | A |
5692213 | Goldberg et al. | Nov 1997 | A |
5701452 | Siefert | Dec 1997 | A |
5710916 | Barbara et al. | Jan 1998 | A |
5724605 | Wissner | Mar 1998 | A |
5732193 | Aberson | Mar 1998 | A |
5850388 | Anderson et al. | Dec 1998 | A |
5918223 | Blum et al. | Jun 1999 | A |
5924071 | Morgan et al. | Jul 1999 | A |
5930369 | Cox et al. | Jul 1999 | A |
5943422 | Van Wie et al. | Aug 1999 | A |
5949885 | Leighton | Sep 1999 | A |
5959659 | Dokic | Sep 1999 | A |
5983176 | Hoffert et al. | Nov 1999 | A |
6006183 | Lai et al. | Dec 1999 | A |
6006256 | Zdepski et al. | Dec 1999 | A |
6011758 | Dockes et al. | Jan 2000 | A |
6026439 | Chowdhury et al. | Feb 2000 | A |
6044402 | Jacobson et al. | Mar 2000 | A |
6067369 | Kamei | May 2000 | A |
6088455 | Logan et al. | Jul 2000 | A |
6092040 | Voran | Jul 2000 | A |
6096961 | Bruti et al. | Aug 2000 | A |
6118450 | Proehl et al. | Sep 2000 | A |
6192340 | Abecassis | Feb 2001 | B1 |
6195693 | Berry et al. | Feb 2001 | B1 |
6229922 | Sasakawa et al. | May 2001 | B1 |
6243615 | Neway et al. | Jun 2001 | B1 |
6243725 | Hempleman et al. | Jun 2001 | B1 |
6253193 | Ginter et al. | Jun 2001 | B1 |
6253337 | Maloney et al. | Jun 2001 | B1 |
6279010 | Anderson | Aug 2001 | B1 |
6279124 | Brouwer et al. | Aug 2001 | B1 |
6285596 | Miura et al. | Sep 2001 | B1 |
6330593 | Roberts et al. | Dec 2001 | B1 |
6345256 | Milsted et al. | Feb 2002 | B1 |
6374260 | Hoffert et al. | Apr 2002 | B1 |
6385596 | Wiser et al. | May 2002 | B1 |
6418421 | Hurtado et al. | Jul 2002 | B1 |
6422061 | Sunshine et al. | Jul 2002 | B1 |
6438556 | Malik et al. | Aug 2002 | B1 |
6449226 | Kumagai | Sep 2002 | B1 |
6452874 | Otsuka et al. | Sep 2002 | B1 |
6453252 | Laroche | Sep 2002 | B1 |
6460050 | Pace et al. | Oct 2002 | B1 |
6463508 | Wolf et al. | Oct 2002 | B1 |
6477704 | Cremia | Nov 2002 | B1 |
6487641 | Cusson | Nov 2002 | B1 |
6490279 | Chen et al. | Dec 2002 | B1 |
6496802 | van Zoest et al. | Dec 2002 | B1 |
6526411 | Ward | Feb 2003 | B1 |
6542869 | Foote | Apr 2003 | B1 |
6550001 | Corwin et al. | Apr 2003 | B1 |
6550011 | Sims, III | Apr 2003 | B1 |
6552254 | Hasegawa et al. | Apr 2003 | B2 |
6591245 | Klug | Jul 2003 | B1 |
6609093 | Gopinath et al. | Aug 2003 | B1 |
6609105 | Van Zoest et al. | Aug 2003 | B2 |
6628737 | Timus | Sep 2003 | B1 |
6636965 | Beyda et al. | Oct 2003 | B1 |
6654757 | Stern | Nov 2003 | B1 |
6732180 | Hale et al. | May 2004 | B1 |
6771316 | Iggulden | Aug 2004 | B1 |
6771885 | Agnihotri et al. | Aug 2004 | B1 |
6834308 | Ikezoye et al. | Dec 2004 | B1 |
6947909 | Hoke, Jr. | Sep 2005 | B1 |
6968337 | Wold | Nov 2005 | B2 |
7043536 | Philyaw et al. | May 2006 | B1 |
7047241 | Erickson et al. | May 2006 | B1 |
7058223 | Cox et al. | Jun 2006 | B2 |
7181398 | Thong et al. | Feb 2007 | B2 |
7266645 | Garg et al. | Sep 2007 | B2 |
7269556 | Kiss et al. | Sep 2007 | B2 |
7281272 | Rubin et al. | Oct 2007 | B1 |
7289643 | Brunk et al. | Oct 2007 | B2 |
7349552 | Levy et al. | Mar 2008 | B2 |
7363278 | Schmelzer et al. | Apr 2008 | B2 |
7500007 | Ikezoye et al. | Mar 2009 | B2 |
7529659 | Wold | May 2009 | B2 |
7562012 | Wold | Jul 2009 | B1 |
7565327 | Schmelzer | Jul 2009 | B2 |
7593576 | Meyer et al. | Sep 2009 | B2 |
20010013061 | DeMartin et al. | Aug 2001 | A1 |
20010027522 | Saito | Oct 2001 | A1 |
20010034219 | Hewitt et al. | Oct 2001 | A1 |
20010037304 | Paiz | Nov 2001 | A1 |
20010056430 | Yankowski | Dec 2001 | A1 |
20020049760 | Scott et al. | Apr 2002 | A1 |
20020064149 | Elliott et al. | May 2002 | A1 |
20020082999 | Lee et al. | Jun 2002 | A1 |
20020087885 | Peled et al. | Jul 2002 | A1 |
20020120577 | Hans et al. | Aug 2002 | A1 |
20020123990 | Abe | Sep 2002 | A1 |
20020129140 | Peled et al. | Sep 2002 | A1 |
20020133494 | Goegdken | Sep 2002 | A1 |
20020152262 | Arkin et al. | Oct 2002 | A1 |
20020156737 | Kahn et al. | Oct 2002 | A1 |
20020158737 | Yokoyama | Oct 2002 | A1 |
20020186887 | Rhoads | Dec 2002 | A1 |
20020198789 | Waldman | Dec 2002 | A1 |
20030014530 | Bodin et al. | Jan 2003 | A1 |
20030018709 | Schrempp et al. | Jan 2003 | A1 |
20030023852 | Wold | Jan 2003 | A1 |
20030033321 | Schrempp et al. | Feb 2003 | A1 |
20030037010 | Schmelzer et al. | Feb 2003 | A1 |
20030061352 | Bohrer et al. | Mar 2003 | A1 |
20030061490 | Abajian | Mar 2003 | A1 |
20030095660 | Lee et al. | May 2003 | A1 |
20030135623 | Schrempp | Jul 2003 | A1 |
20030191719 | Ginter et al. | Oct 2003 | A1 |
20030195852 | Campbell et al. | Oct 2003 | A1 |
20040008864 | Watson et al. | Jan 2004 | A1 |
20040010495 | Kramer et al. | Jan 2004 | A1 |
20040053654 | Kokumai et al. | Mar 2004 | A1 |
20040073513 | Stefik et al. | Apr 2004 | A1 |
20040089142 | Georges et al. | May 2004 | A1 |
20040133797 | Arnold | Jul 2004 | A1 |
20040148191 | Hoke, Jr. | Jul 2004 | A1 |
20040163106 | Schrempp et al. | Aug 2004 | A1 |
20040167858 | Erickson | Aug 2004 | A1 |
20040201784 | Dagtas et al. | Oct 2004 | A9 |
20050021783 | Ishii | Jan 2005 | A1 |
20050039000 | Erickson | Feb 2005 | A1 |
20050044189 | Ikezoye et al. | Feb 2005 | A1 |
20050097059 | Shuster | May 2005 | A1 |
20050154678 | Schmelzer | Jul 2005 | A1 |
20050154680 | Schmelzer | Jul 2005 | A1 |
20050154681 | Schmelzer | Jul 2005 | A1 |
20050216433 | Bland et al. | Sep 2005 | A1 |
20050267945 | Cohen et al. | Dec 2005 | A1 |
20050289065 | Weare | Dec 2005 | A1 |
20060034177 | Schrempp | Feb 2006 | A1 |
20060062426 | Levy et al. | Mar 2006 | A1 |
20070074147 | Wold | Mar 2007 | A1 |
20070078769 | Way et al. | Apr 2007 | A1 |
20080008173 | Kanevsky et al. | Jan 2008 | A1 |
20080133415 | Ginter et al. | Jun 2008 | A1 |
20080141379 | Schmelzer | Jun 2008 | A1 |
20080154730 | Schmelzer | Jun 2008 | A1 |
20080155116 | Schmelzer | Jun 2008 | A1 |
20090030651 | Wold | Jan 2009 | A1 |
20090031326 | Wold | Jan 2009 | A1 |
20090043870 | Ikezoye et al. | Feb 2009 | A1 |
20090077673 | Schmelzer | Mar 2009 | A1 |
20090089586 | Brunk et al. | Apr 2009 | A1 |
20090192640 | Wold | Jul 2009 | A1 |
20090240361 | Wold et al. | Sep 2009 | A1 |
20090328236 | Schmelzer | Dec 2009 | A1 |
Number | Date | Country
---|---|---
0349106 | Jan 1990 | EP
0402210 | Dec 1990 | EP
0459046 | Dec 1991 | EP
0517405 | Dec 1992 | EP
0402210 | Aug 1995 | EP
0689316 | Dec 1995 | EP
0731446 | Sep 1996 | EP
0859503 | Aug 1998 | EP
0859503 | Dec 1999 | EP
1449103 | Aug 2004 | EP
1485815 | Dec 2004 | EP
1593018 | Nov 2005 | EP
1354276 | Dec 2007 | EP
1485815 | Oct 2009 | EP
9636163 | Nov 1996 | WO
9636163 | Nov 1996 | WO
9820672 | May 1998 | WO
9820672 | May 1998 | WO
0005650 | Feb 2000 | WO
0039954 | Jul 2000 | WO
0063800 | Oct 2000 | WO
0123981 | Apr 2001 | WO
0162004 | Aug 2001 | WO
0203203 | Jan 2002 | WO
0215035 | Feb 2002 | WO
0215035 | Feb 2002 | WO
0215035 | Feb 2002 | WO
0237316 | May 2002 | WO
0237316 | May 2002 | WO
02082271 | Oct 2002 | WO
03007235 | Jan 2003 | WO
03009149 | Jan 2003 | WO
03036496 | May 2003 | WO
03067459 | Aug 2003 | WO
03091990 | Nov 2003 | WO
2004044820 | May 2004 | WO
2004070558 | Aug 2004 | WO
2006015168 | Feb 2006 | WO
2009017710 | Feb 2009 | WO
Number | Date | Country | |
---|---|---|---|
20030033321 A1 | Feb 2003 | US |
 | Number | Date | Country
---|---|---|---
Parent | 09910680 | Jul 2001 | US
Child | 09999763 | | US