Dual compression voice recordation non-repudiation system

Abstract
A dual compression voice recordation non-repudiation system compresses voice samples for two purposes: voice recognition and human communication. The user navigates through menus displayed on a television set using both voice commands and button presses on a remote control. The invention accepts voice samples from the remote control and compresses each sample for voice recognition while a copy of the sample is stored on a storage device. Compressed voice samples are sent to a Voice Engine that performs voice recognition on the voice sample to verify that it is from an authorized user in the consumer's household and determines the action required. If the command is to form a contractual agreement or make a purchase, the Voice Engine determines the merchant server appropriate for the action and sends the action request to that server. The voice sample is also compressed for human communication and stored on a storage device along with any additional information. The stored human communication compressed sample and any additional information may later be retrieved, and the human communication compressed sample decompressed into a form that can be played back, when a user attempts to repudiate a contractual agreement or purchase. Alternatively, the invention performs both compressions at the same time.
Description
BACKGROUND OF THE INVENTION

1. Technical Field


The invention relates to voice recognition in a computer environment. More particularly, the invention relates to recording, compressing, and recognizing voice samples for non-repudiation purposes in a computer environment.


2. Description of the Prior Art


There is currently a push among set-top manufacturers to produce set-tops that extend beyond the television, video, and Internet realm. Television set-top boxes that deliver cable television signals to viewers are commonplace, as are pay-per-view services for premium viewers.


WebTV has attempted to make headway into consumers' living rooms for many years, offering consumers the ability to surf the Internet through their television sets. America Online has announced AOLTV, which will provide the viewer with cable television services, Digital Video Recorder features, and Internet access. UltimateTV has recently released a set-top box that tries to provide the same services as AOLTV.


Every one of these approaches requires that a keyboard and mouse be connected to the set-top box in order to interact with the user interfaces. Commands, information, and URLs are entered using the keyboard, while the mouse is used to traverse clickable menus and hyperlinks.


One of the problems with the use of keyboards and mice is that they are cumbersome and require that the user be computer literate and have some semblance of manual dexterity. Computer-phobic and certain handicapped consumers typically shy away from these types of set-top boxes for those reasons.


Another problem, particularly in the pay-per-view arena, is that consumers will order a movie and, after the movie is viewed, will later call the provider and complain that they never ordered the movie and demand a refund. The pay-per-view provider loses a large amount of revenue when customers falsely repudiate their purchases. The provider typically has no alternative but to refund the customer's charge because there is no proof that it was in fact the customer that had ordered the movie in the first place.


A method of creating a verifiable trail that clearly identifies the person who initiated and confirmed the purchase is needed. Voice recognition and voice commands have not been used in the set-top arena to navigate user interface menus, pay-per-view menus, ecommerce purchases, and the Internet. The ability to demonstrate to the customer that he did make the purchase, by playing to the customer a recording of his voice as he made the actual purchase, would solve the problem of customers falsely or mistakenly repudiating purchases. This would allow providers to reliably retain their revenue stream.


It would be advantageous to provide a dual compression voice recordation non-repudiation system that allows providers to reliably identify users through voice recognition and to use the user's voice for non-repudiation purposes. It would further be advantageous to provide a dual compression voice recordation non-repudiation system that performs compression techniques on voice samples for both voice recognition and human communication.


SUMMARY OF THE INVENTION

The invention provides a dual compression voice recordation non-repudiation system. The system allows providers to reliably identify users through voice recognition and to use the user's voice for non-repudiation purposes. In addition, the invention provides both voice recognition and human communication compression techniques for voice samples.


A preferred embodiment of the invention provides a voice recognition system that compresses voice samples for two purposes: voice recognition and human communication. Menus are displayed to a user through a television set or monitor. The user navigates through menu trees using both voice commands and button presses on a remote control.


The invention accepts voice samples from the remote control and compresses the voice sample for voice recognition. A copy of the voice sample is stored on a storage device.


Compressed voice samples are placed into packets and sent to a Voice Engine that performs voice recognition on the voice sample to determine if it is from an authorized user in the consumer's household. Once verified, the voice recognition sample is further processed to determine the action required.


If the command is to form a contractual agreement or make a purchase, the Voice Engine determines the merchant server that is appropriate for the action and sends the action request to the server. Once the action is performed, a transaction confirmation is displayed to the user. The voice sample is compressed for human communication and sent to the Voice Engine along with other information such as the last n utterance samples or the last n button presses.


The Voice Engine stores the human communication compressed sample on a storage device along with any additional information. The stored human communication compressed sample and any additional information on the storage device may later be retrieved, and the human communication compressed sample decompressed into a form that can be played back when a user attempts to repudiate a contractual agreement or purchase.


Alternatively, the invention can perform both compressions at the same time, thus bypassing the step of having to store the voice sample onto the storage device.


The user may be required to speak a challenge phrase or command phrase to complete an agreement or transaction. The Voice Engine then stores a copy of the human communication compressed sample of the challenge phrase or command phrase on the storage device for later retrieval for non-repudiation purposes.


Other aspects and advantages of the invention will become apparent from the following detailed description in combination with the accompanying drawings, illustrating, by way of example, the principles of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block schematic diagram showing the interaction of a voice remote control and a preferred embodiment of the invention in conjunction with a set-top box according to the invention;



FIG. 2 is a block schematic diagram of a preferred embodiment of the invention showing the flow of information between components of the invention and other systems according to the invention;



FIG. 3 is a block schematic diagram of a telephone based implementation of the invention according to the invention;



FIG. 4 is a block schematic diagram of a task-oriented viewpoint of a preferred embodiment of the invention illustrating the client system tasks according to the invention; and



FIG. 5 is a block schematic diagram of a task-oriented viewpoint of a preferred embodiment of the invention illustrating the voice engine server tasks according to the invention.





DETAILED DESCRIPTION OF THE INVENTION

The invention is embodied in a dual compression voice recordation non-repudiation system in a computer environment. A system according to the invention allows service providers and merchants to reliably identify users through voice recognition and to use the user's voice for non-repudiation purposes. In addition, the invention provides both voice recognition and human communication compression techniques for voice samples.


The invention provides a voice identification system that performs voice identification of a user and, upon verification, records the user's voice commands. Two types of voice compression are performed automatically, one for voice recognition and one for recording. The user's recorded voice is later used for non-repudiation contractual purposes when a user calls to cancel an order or challenge an option selection.


Referring to FIG. 1, a preferred embodiment of the invention provides a remote control 101 with a microphone 103 and a push to talk button 102. The user speaks 104 into the microphone 103 and the remote control 101 transmits voice commands and samples 105 via IR, RF, or other means, to a voice sampler 106 connected to or incorporated in a set-top box 107. The set-top box 107 is connected to a television set 108 where user interface menus and screens are displayed 109 to the user.


With respect to FIG. 2, voice samples from the remote control 201 are sent to the voice sampler/compressor 202 which has a two-way serial connection to a set-top box 203. The set-top box 203 is connected to a television set 204 and a hybrid fiber coax (HFC) network 205. The HFC network 205 connects the set-top box with a voice engine 206. The voice samples are transmitted from the set-top box 203 through the HFC network 205 to the voice engine 206 that contains multiple processors.


The voice sampler/compressor 202 performs two different types of compression on audio samples. The incoming audio stream is of very high quality, e.g., 16 bits×16 kHz = 256 kilobits per second (approximately 32 kilobytes per second), while the upstream bit rate across the HFC network 205 is only 4.8 kilobits per second. This means that the samples being sent upstream across the HFC network 205 must be compressed aggressively. The invention compresses voice samples in two ways: for voice recognition (cepstral coefficients); and for human communication. Compression for voice recognition is not invertible, i.e., the compressed samples can be understood by the speech recognition system but are not intelligible to a human being. Compression for human communication is used, for example, for playback purposes such as non-repudiation of contracts.
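Compression for voice recognition keeps only spectral-envelope features such as cepstral coefficients, which is why it is not invertible. As a minimal illustrative sketch (not part of the patent), the cepstrum of an audio frame can be computed as the inverse transform of the log magnitude spectrum; the phase discarded by the magnitude step is what makes the result unplayable:

```python
import cmath
import math

def cepstral_features(frame, n_coeffs=13):
    """Compress one audio frame for speech recognition by keeping only
    the first few cepstral coefficients.  The magnitude spectrum
    discards phase, so this compression is non-invertible: the result
    cannot be decompressed back into intelligible audio."""
    n = len(frame)
    # Magnitude spectrum via a direct DFT (adequate for a short frame).
    spectrum = [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n)]
    log_spectrum = [math.log(s + 1e-10) for s in spectrum]  # avoid log(0)
    # Inverse DFT of the log magnitude spectrum gives the cepstrum.
    cepstrum = [sum(log_spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                    for k in range(n)).real / n
                for t in range(n)]
    return cepstrum[:n_coeffs]

# A 25 ms frame at 16 kHz is 400 samples (800 bytes at 16 bits); a
# shorter synthetic frame keeps the direct DFT fast here.
frame = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(64)]
features = cepstral_features(frame)
print(len(features))  # 13
```

Reducing each frame to a handful of coefficients illustrates the aggressive rate reduction required to fit the constrained upstream channel; a production front end would additionally window, mel-warp, and quantize the features.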


Speech compression is well known in the art. The invention provides a system that performs voice recognition compression or voice compression for human communication on demand.


A preferred embodiment of the invention configures the voice sampler/compressor 202 to, by default, compress for voice recognition. When the voice sample comes down to the voice sampler/compressor 202, the voice sampler/compressor 202 saves a copy of the voice sample, compresses the voice sample for voice recognition, and sends the voice recognition compressed samples through the set-top box 203 to the voice engine 206. The voice engine 206 analyzes the voice sample to determine the user's identity. If the voice sample is from a valid user, then the sample is used to instruct the voice engine 206 to perform an action such as purchase an item or service.


The voice engine 206 then commands the appropriate vendor server 208, 210 to perform the requested action. Once the action is performed, the voice engine 206 requests a non-repudiation copy of the voice sample from the voice sampler/compressor 202. The voice sampler/compressor 202 retrieves the stored copy of the voice sample, compresses the voice sample for human communication, and sends the human communication compressed sample to the voice engine 206.
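The default flow just described (save a raw copy, send the voice recognition compression upstream, and produce the playable compression only when the voice engine requests it) can be sketched as follows. The class and compressor names are hypothetical stand-ins, not from the patent:

```python
class VoiceSamplerCompressor:
    """Minimal sketch of the voice sampler/compressor's default mode:
    compress for voice recognition up front, keep the raw sample, and
    compress for human communication only on demand."""

    def __init__(self, compress_for_asr, compress_for_playback):
        self._asr = compress_for_asr            # non-invertible (e.g. cepstral)
        self._playback = compress_for_playback  # invertible, playable
        self._saved = {}                        # sample_id -> raw sample

    def handle_sample(self, sample_id, raw_sample):
        """Default path: save a copy, return the ASR compression
        that is sent upstream to the voice engine."""
        self._saved[sample_id] = raw_sample
        return self._asr(raw_sample)

    def non_repudiation_copy(self, sample_id):
        """On request from the voice engine, compress the saved raw
        copy for human communication (later playback)."""
        return self._playback(self._saved[sample_id])

# Usage with trivial placeholder compressors:
sampler = VoiceSamplerCompressor(
    compress_for_asr=lambda s: ("asr", len(s)),       # placeholder
    compress_for_playback=lambda s: ("playable", s),  # placeholder
)
upstream = sampler.handle_sample("utt-1", [1, 2, 3])
playable = sampler.non_repudiation_copy("utt-1")
print(upstream, playable)
```

The simultaneous-compression alternative described later would simply call both compressors inside `handle_sample`, removing the need to retain the raw copy.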


The voice engine 206 then stores the human communication compressed sample on a storage device 207, along with other pertinent data such as time stamps, previous button presses, etc. The stored human communication compressed sample can later be retrieved by, for example, a customer service representative 212 for use when the customer attempts to repudiate a purchase. When the voice sample is needed, the customer service system 212 filters the stored sample through a decompressor 211. The decompressor 211 decompresses the human communication compressed voice sample to a form that can be played back to the customer.


As an example, a user says “buy Dumbo,” into the remote control 201. The voice utterance is sent to the voice sampler/compressor 202. The voice sampler/compressor 202 stores a copy of the voice utterance, compresses the sample for voice recognition, and sends the voice recognition compressed sample to the voice engine 206.


When the voice engine 206 identifies that the voice utterance is “buy Dumbo,” it tells the video on demand server 208 to purchase the movie Dumbo. The voice engine 206 then requests a non-repudiation sample from the voice sampler/compressor 202.


The voice sampler/compressor 202 retrieves the copy of the user's last n utterances and compresses the n utterances for human communication. The voice sampler/compressor 202 then sends the samples, compressed for human communication, to the voice engine 206. The voice engine 206 stores the samples compressed for human communication onto a non-volatile storage device 207. The sample can also be stored redundantly so the sample cannot be lost.


Later on, the user calls the provider's customer service 212 to complain that he did not purchase the movie Dumbo. Customer service 212 finds the record of the user's purchase on the storage device 207 and retrieves the compressed voice sample from it. The compressed voice sample is sent to the decompressor 211 and the resulting voice sample is played back to the user to prove that he did, indeed, order the movie Dumbo.


Another preferred embodiment of the invention allows the user to manipulate a series of menus on the television screen 204. When the user wants to purchase a service or product, he highlights a purchase button and selects it. The system then asks the user to speak a specific challenge phrase into the remote control 201, e.g., “I want to buy this Acme dishwasher” or “I confirm the purchase of this Acme dishwasher,” to confirm the action.


The voice sample is saved and compressed by the voice sampler/compressor 202 as described above. The voice sampler/compressor 202 receives the command and sends a voice recognition compressed sample to the voice engine 206. The voice engine 206 confirms that the challenge phrase is correct. Once the challenge phrase is identified, the voice engine 206 sends the purchase command to the appropriate vendor server, e.g., through the Internet 209 to an ecommerce vendor 210. The invention can also confirm that the voice belongs to an authorized person.


The voice engine 206 requests a human communication compressed version of the voice sample from voice sampler/compressor 202. In response, the voice sampler/compressor 202 retrieves the stored voice sample, compresses it for human communication, and sends it to the voice engine 206. The voice engine 206 stores the voice sample on the non-volatile storage device 207 and sends the purchase confirmation to the user.


In yet another preferred embodiment of the invention, the voice sampler/compressor 202 simultaneously compresses the voice sample for both voice recognition and human communication. Both compressed samples are then sent to the voice engine 206. The voice engine 206 does not have to make a request for the human communication compressed sample later on.


Alternatively, the voice sampler/compressor 202 could instead perform both compressions, but store the human communication compressed sample while sending the voice recognition compressed sample to the voice engine 206. The voice sampler/compressor 202 does not have to store a copy of the original voice sample.


A further preferred embodiment of the invention requires the user to say something during each step of the purchase, e.g., “movies,” “children's movies,” “Dumbo,” “buy Dumbo,” thus logging the user's progression. Each time the user speaks and progresses deeper into the menu set, up to the purchase point, the voice samples are stored by the voice engine for later proof of purchase in case the user repudiates the transaction.


Having a reliable, verifiable means to confirm customer identities allows even wider applications. For example, privacy policies concerning the use of private information are very sensitive issues in the marketplace today. Consumers must opt in or opt out of certain privacy policies, depending on the country or state in which they reside. The invention is easily adaptable to recognize and store a consumer's response when opting in to, remaining neutral on, or opting out of a privacy policy. The consumer can visit the Web site of a retailer, for example, read the Web site's privacy policy, and then verbally respond to the options for opting in or out. The consumer's voice is later used as proof of the consumer's privacy choice.


Referring to FIG. 3, one skilled in the art will readily appreciate that telephone based systems are also easily adaptable using the invention. The invention, for example, is easily adapted to take telephone opt-in or opt-out statements from a consumer and to store the voice samples as proof of the consumer's choice. The consumer speaks, telling the system what his selection is, through the telephone 301. The voice sampler/compressor 302 sends the compressed voice sample to the voice engine 303. The voice engine 303 confirms that the consumer has selected one of the available options and stores the human communication compressed sample onto the storage device 304. If the customer service system 306 needs to prove that the consumer has made a certain selection, it will retrieve the voice sample from the storage device 304 and filter it through a decompressor 305 for playback to the consumer.


With respect to FIGS. 4 and 5, a task viewpoint of the invention is shown. Voice samples from a remote control are received by the Receive Voice Samples module 401. The Voice Sampler Manager 402 receives the voice samples from the Receive Voice Samples module 401. When a sample is received, the Voice Sampler Manager 402 sends a copy to the Voice Recognition Compressor 405 which compresses the voice sample for voice recognition and sends the compressed sample back to the Voice Sampler Manager 402.


The Voice Sampler Manager 402 is aware of what menu is displayed through the Display Menu Manager 403. The Voice Sampler Manager 402 places the voice recognition compressed sample in a packet, sends it to the Receive Voice Packets module 501, and stores a copy of the voice sample on the storage device 407. The packets assembled by the Voice Sampler Manager 402 may also contain additional information identifying the user's location, ID number, merchant, last n button presses, etc.


The Receive Voice Packets module 501 receives voice packets and forwards them to the Voice Engine Manager 502. Voice recognition compressed samples are sent to the Voice Recognizer 504. The Voice Recognizer 504 determines if the voice sample is from an authorized user in the consumer's household. Once verified, the voice recognition sample is sent to the Command Converter 503 to determine the action required.


Menu navigation commands are sent by the Voice Engine Manager 502 to the Display Menu Manager 403 via the Voice Sampler Manager 402. The Display Menu Manager 403 displays the resulting menu provided by the Voice Engine Manager 502 or, alternatively, from its local menu tree.


If the command is to form a contractual agreement or make a purchase, the Voice Engine Manager 502 determines the merchant server that is appropriate for the action and sends the action request to the server. Once the action is performed, the Voice Engine Manager 502 sends the transaction confirmation and a request for the human communication compressed sample of the voice sample to the Voice Sampler Manager 402.


Transaction confirmations are displayed to the user through the Display Menu Manager 403. The Voice Sampler Manager 402 retrieves the voice sample, and possibly the last n utterance samples, from the storage device 407 and sends them to the Human Communication Compressor 406. Samples are compressed for human communication by the Human Communication Compressor 406, assembled into packets (which may also contain additional information such as the last n button presses), and sent to the Receive Voice Packets module 501 by the Voice Sampler Manager 402.


Voice packets are forwarded by the Receive Voice Packets module 501 to the Voice Engine Manager 502. The Voice Engine Manager 502 stores the human communication compressed sample on the storage device 506 along with any additional information (such as the last n button presses).


The stored human communication compressed sample and any additional information on the storage device 506 may later be retrieved by the Decompressor 505, which decompresses the human communication compressed sample into a form that can be played back.


As noted above, the Voice Sampler Manager 402 can perform both compressions at the same time, thus bypassing the step of having to store the voice sample onto the storage device 407. In that case, both compressed samples are sent to the Receive Voice Packets module 501 without the Voice Engine Manager 502 requesting the human communication compressed sample.


Alternatively, the Voice Sampler Manager 402 is aware of what menu is displayed through the Display Menu Manager 403. If a command is expected, the Voice Sampler Manager 402 sends the voice recognition compressed sample to the Command Converter 404 to check if a valid command has been spoken. Valid commands are then executed through the Display Menu Manager 403.


If a challenge phrase or command phrase (e.g., “buy Dumbo”) is expected, then the Voice Sampler Manager 402 places the voice recognition compressed sample in a packet and sends it to the Receive Voice Packets module 501 and stores a copy of the voice sample on the storage device 407.


If the user is traversing a menu tree that leads to a purchase, for example, the Voice Sampler Manager 402 can save the voice sample on the storage device 407 for later retrieval and will continue through the menu tree.


One skilled in the art will readily appreciate that although the voice sampler/compressor and voice engine functionalities are described separately above, both the voice sampler/compressor and voice engine can reside on the same physical machine.


Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. Accordingly, the invention should only be limited by the claims included below.

Claims
  • 1. A process for a dual-compression voice recognition system for non-repudiation of contractual agreements in a computer environment, comprising the steps of:
    providing a video content head-end voice processing engine configured for receiving requests to navigate through available content and for purchasing content products from an on-line merchant by identifying the content in a natural language and speaking a challenge phrase;
    providing user interface means for displaying a plurality of menus on a television screen to said user;
    receiving a voice sample from a user indicating an action and speaking said challenge phrase into a voice sample reception means;
    concurrently compressing said voice sample in two ways, comprising both:
      compressing said voice sample according to a first compression technique to yield an Automatic Speech Recognition (ASR) compression, wherein said ASR compression is configured for processing by an automatic speech recognition engine, wherein said first compression technique involves using a cepstrum to minimize bandwidth, and wherein a cepstral compression is noninvertible; and
      compressing said voice sample according to a second compression technique to yield a human communication compression, wherein said human communication compression is formed using an invertible compression process;
    transmitting both said ASR compression and said human communication compression through a set-top box and across a hybrid fiber coax network to said head-end voice processing engine, wherein said head-end voice processing engine is configured for performing the steps of:
      analyzing said ASR compression to determine if said ASR compression is from an authorized user;
      correlating said ASR compression with an interface navigation command and a request to make a purchase of content via said plurality of menus;
      executing said interface navigation command, thereby causing said user interface means to navigate to a menu containing an option to purchase user-specified content on said television screen;
      executing said request to make a purchase of user-specified content;
      sending said request to make a purchase of user-specified content to an appropriate merchant server;
      delivering said user-specified content to said user;
      receiving a repudiation request from said user alleging that said user did not request to make a purchase of said user-specified content;
      decompressing said human communication compression for playback; and
      playing back said decompressed challenge phrase to said user, thereby confirming the identity of said user as the provider of said challenge phrase and said request to make a purchase of user-specified content, and avoiding repudiation of said request to make a purchase by said user.
  • 2. The process of claim 1, wherein said user interface means displays a confirmation of said action upon completion of said action.
  • 3. The process of claim 1, further comprising the step of: storing additional information with said human communication compression, said additional information comprising any of said user's location, ID number, merchant, the last predetermined number of utterances, and the last predetermined number of button presses.
  • 4. An apparatus for a dual-compression voice recognition system for non-repudiation of contractual agreements in a computer environment, comprising:
    a video content head-end voice processing engine configured for receiving requests to navigate through available content and for purchasing content products from an on-line merchant by identifying the content in a natural language and speaking a challenge phrase;
    a graphical user interface (GUI) for displaying a plurality of menus on a television screen to a user;
    voice sample reception means for receiving a plurality of voice samples from said user, wherein at least some of said voice samples indicate an action and speak said challenge phrase;
    a module for concurrently compressing at least one voice sample for voice recognition in two ways, said module comprising both:
      a first processing means for compressing said voice sample according to a first compression technique to yield an Automatic Speech Recognition (ASR) compression, wherein said ASR compression is configured for processing by an automatic speech recognition engine, wherein said first compression technique involves voice recognition using a cepstrum to minimize bandwidth, and wherein a cepstral compression is noninvertible; and
      a second processing means for compressing said voice sample according to a second compression technique to yield a human communication compression, wherein said human communication compression is formed using an invertible compression process;
    a set-top box for transmitting both said ASR compression and said human communication compression over a hybrid fiber coax network to a head-end voice engine, wherein said head-end voice engine further comprises:
      a module for analyzing said ASR compression to determine if said ASR compression is from an authorized user;
      a module for correlating said ASR compression with an interface navigation command and a request to make a purchase of content via said plurality of menus;
      a module for executing said interface navigation command, thereby causing said user interface means to navigate to a menu containing an option to purchase user-specified content on said television screen;
      a module for executing said request to make a purchase of user-specified content;
      a module for sending said request to make a purchase of user-specified content to an appropriate merchant server;
      a module for delivering said user-specified content to said user;
      a module for receiving a repudiation request from said user alleging that said user did not request to make a purchase of said user-specified content;
      a module for decompressing said human communication compression; and
      a module for playing back said decompressed playback sample to said user, thereby confirming the identity of said user as the provider of said request to make a purchase of user-specified content and said challenge phrase or said command phrase.
  • 5. The apparatus of claim 4, wherein said user interface means displays a confirmation of said action upon completion of said action.
  • 6. The apparatus of claim 5, wherein said compressed human communication sample storing module stores additional information with said human communication compression, said additional information comprising any of said user's location, ID number, merchant, the last predetermined number of utterances, and the last predetermined number of button presses.
CROSS-REFERENCES TO RELATED APPLICATIONS

The present application is a continuation in-part of U.S. patent application Ser. No. 09/785,375 filed Feb. 16, 2001 now U.S. Pat. No. 7,047,196, and claims priority to U.S. Provisional Patent Application Ser. No. 60/504,171, filed Sep. 18, 2003, both of which are incorporated herein in their entirety by this reference thereto.

5604739 Buhrgard et al. Feb 1997 A
5608448 Smoral et al. Mar 1997 A
5608624 Luciw Mar 1997 A
5613191 Hylton et al. Mar 1997 A
5615296 Stanford et al. Mar 1997 A
5625814 Luciw Apr 1997 A
5630061 Richter et al. May 1997 A
5630162 Wilkinson et al. May 1997 A
5634086 Rtischev et al. May 1997 A
5642519 Martin Jun 1997 A
5648969 Pasternak et al. Jul 1997 A
5661005 Shattil et al. Aug 1997 A
5664061 Andreshal et al. Sep 1997 A
5666400 McAllister et al. Sep 1997 A
5668929 Foster, Jr. Sep 1997 A
5669008 Galles et al. Sep 1997 A
5671225 Hooper et al. Sep 1997 A
5671328 Fitzpatrick et al. Sep 1997 A
5682539 Conrad et al. Oct 1997 A
5708836 Wilkinson et al. Jan 1998 A
5710935 Barker et al. Jan 1998 A
5717943 Barker et al. Feb 1998 A
5719860 Maison et al. Feb 1998 A
5719921 Vysotsky et al. Feb 1998 A
5721827 Logan et al. Feb 1998 A
5721902 Schultz Feb 1998 A
5721938 Stuckey Feb 1998 A
5726984 Kubler et al. Mar 1998 A
5726990 Shimada et al. Mar 1998 A
5729659 Potter Mar 1998 A
5732216 Logan et al. Mar 1998 A
5734921 Dapp et al. Mar 1998 A
5737495 Adams et al. Apr 1998 A
5742762 Scholl et al. Apr 1998 A
5748841 Morin et al. May 1998 A
5748974 Johnson May 1998 A
5752068 Gilbert May 1998 A
5752232 Basore et al. May 1998 A
5752246 Rogers et al. May 1998 A
5774525 Kanevsky et al. Jun 1998 A
5774841 Salazar et al. Jun 1998 A
5774859 Houser et al. Jun 1998 A
5774860 Bayya et al. Jun 1998 A
5777571 Chuang Jul 1998 A
5793770 St. John et al. Aug 1998 A
5794050 Dahlgren et al. Aug 1998 A
5794205 Walters et al. Aug 1998 A
5802526 Fawcett et al. Sep 1998 A
5805775 Eberman et al. Sep 1998 A
5809471 Brodsky Sep 1998 A
5812977 Douglas Sep 1998 A
5818834 Skierszkan Oct 1998 A
5819220 Sarukkai et al. Oct 1998 A
5819225 Eastwood et al. Oct 1998 A
5821508 Willard Oct 1998 A
5822727 Garberg et al. Oct 1998 A
5826267 McMillan Oct 1998 A
5832063 Vysotsky et al. Nov 1998 A
5832439 Cox, Jr. et al. Nov 1998 A
5842031 Barker et al. Nov 1998 A
5842034 Bolstad et al. Nov 1998 A
5844888 Markkula, Jr. et al. Dec 1998 A
5855002 Armstrong Dec 1998 A
5860059 Aust et al. Jan 1999 A
5864810 Digalakis et al. Jan 1999 A
5864815 Rozak et al. Jan 1999 A
5864819 De Armas et al. Jan 1999 A
5867817 Catallo et al. Feb 1999 A
5870134 Laubach et al. Feb 1999 A
5874939 Galvin Feb 1999 A
5882168 Thompson et al. Mar 1999 A
5883901 Chiu et al. Mar 1999 A
5884265 Squitteri et al. Mar 1999 A
5885083 Ferrell Mar 1999 A
5890122 Van Kleeck et al. Mar 1999 A
5890123 Brown et al. Mar 1999 A
5892813 Morin et al. Apr 1999 A
5893063 Loats et al. Apr 1999 A
5894474 Maison et al. Apr 1999 A
5898827 Hornung et al. Apr 1999 A
5903870 Kaufman May 1999 A
5915001 Uppaluru Jun 1999 A
5917891 Will Jun 1999 A
5920841 Schottmuller et al. Jul 1999 A
5923653 Denton Jul 1999 A
5923738 Cardillo, IV et al. Jul 1999 A
5929850 Broadwin et al. Jul 1999 A
5930341 Cardillo, IV et al. Jul 1999 A
5933607 Tate et al. Aug 1999 A
5933835 Adams et al. Aug 1999 A
5937041 Cardillo, IV et al. Aug 1999 A
5944779 Blum Aug 1999 A
5946050 Wolff Aug 1999 A
5953700 Kanevsky et al. Sep 1999 A
5960209 Blount Sep 1999 A
5960399 Barclay et al. Sep 1999 A
5963557 Eng Oct 1999 A
5963940 Liddy et al. Oct 1999 A
5966528 Wilkinson et al. Oct 1999 A
5970457 Brant et al. Oct 1999 A
5970473 Gerszberg et al. Oct 1999 A
5974385 Ponting et al. Oct 1999 A
5987518 Gotwald Nov 1999 A
5991365 Pizano et al. Nov 1999 A
5991719 Yazaki et al. Nov 1999 A
5999970 Krisbergh et al. Dec 1999 A
6002851 Basavaiah et al. Dec 1999 A
6003072 Gerritsen et al. Dec 1999 A
6009107 Arvidsson et al. Dec 1999 A
6009398 Mueller et al. Dec 1999 A
6009470 Watkins Dec 1999 A
6012030 French-St. George et al. Jan 2000 A
6014817 Thompson et al. Jan 2000 A
6015300 Schmidt, Jr. et al. Jan 2000 A
6018711 French-St. George et al. Jan 2000 A
6021371 Fultz Feb 2000 A
6026388 Liddy et al. Feb 2000 A
6034678 Hoarty et al. Mar 2000 A
6035267 Watanabe et al. Mar 2000 A
6035275 Brode et al. Mar 2000 A
6047319 Olson Apr 2000 A
6078733 Osborne Jun 2000 A
6084572 Yaniger et al. Jul 2000 A
6100883 Hoarty Aug 2000 A
6108565 Scherzer Aug 2000 A
6112245 Araujo et al. Aug 2000 A
6115584 Tait et al. Sep 2000 A
6115737 Ely et al. Sep 2000 A
6118785 Araujo et al. Sep 2000 A
6119162 Li et al. Sep 2000 A
6122362 Smith et al. Sep 2000 A
6128642 Doraswamy et al. Oct 2000 A
6137777 Vaid et al. Oct 2000 A
6141653 Conklin et al. Oct 2000 A
6141691 Frink et al. Oct 2000 A
6145001 Scholl et al. Nov 2000 A
6151319 Dommety et al. Nov 2000 A
6161149 Achacoso et al. Dec 2000 A
6173279 Levin et al. Jan 2001 B1
6181334 Freemen et al. Jan 2001 B1
6182052 Fulton et al. Jan 2001 B1
6182136 Ramanathan et al. Jan 2001 B1
6182139 Brendel Jan 2001 B1
6184808 Nakamura Feb 2001 B1
6185619 Joffe et al. Feb 2001 B1
6192338 Haszto et al. Feb 2001 B1
6192340 Abecassis Feb 2001 B1
6192408 Vahalia et al. Feb 2001 B1
6198822 Doyle et al. Mar 2001 B1
6204843 Freeman et al. Mar 2001 B1
6205582 Hoarty Mar 2001 B1
6208650 Hassell et al. Mar 2001 B1
6212280 Howard, Jr. et al. Apr 2001 B1
6215484 Freemen et al. Apr 2001 B1
6215789 Keenan et al. Apr 2001 B1
6225976 Yates et al. May 2001 B1
6320947 Joyce Nov 2001 B1
6353444 Katta et al. Mar 2002 B1
6369614 Ridgway Apr 2002 B1
6370499 Fujii Apr 2002 B1
6374177 Lee Apr 2002 B1
6381316 Joyce Apr 2002 B2
6401066 McIntosh Jun 2002 B1
6445717 Gibson et al. Sep 2002 B1
6480703 Calderone et al. Nov 2002 B1
6510157 Kwok et al. Jan 2003 B2
6523061 Halverson et al. Feb 2003 B1
6543052 Ogasawara Apr 2003 B1
6570913 Chen May 2003 B1
6573907 Madrane Jun 2003 B1
6633846 Bennett et al. Oct 2003 B1
6658414 Bryan Dec 2003 B2
6661778 Trofin et al. Dec 2003 B1
6668292 Meyer et al. Dec 2003 B2
6711543 Cameron Mar 2004 B2
6714632 Joyce et al. Mar 2004 B2
6721633 Funk Apr 2004 B2
6725022 Clayton Apr 2004 B1
6728531 Lee Apr 2004 B1
6742021 Halverson et al. May 2004 B1
6750792 Azami et al. Jun 2004 B2
6785653 White et al. Aug 2004 B1
6799201 Lee Sep 2004 B1
6889191 Rodriguez et al. May 2005 B2
6892083 Shostak May 2005 B2
6895379 Smyers et al. May 2005 B2
6934963 Reynolds et al. Aug 2005 B1
7013283 Murdock et al. Mar 2006 B1
7047196 Calderone et al. May 2006 B2
7113981 Slate Sep 2006 B2
7233898 Byrnes et al. Jun 2007 B2
7260538 Calderone et al. Aug 2007 B2
7471786 Erhart et al. Dec 2008 B2
20010007149 Smith Jul 2001 A1
20010019604 Joyce Sep 2001 A1
20020012352 Hansson et al. Jan 2002 A1
20020015480 Daswani Feb 2002 A1
20020049535 Rigo Apr 2002 A1
20020052746 Handelman May 2002 A1
20020103656 Bahler et al. Aug 2002 A1
20020106065 Joyce Aug 2002 A1
20020118676 Tonnby et al. Aug 2002 A1
20020146015 Bryan Oct 2002 A1
20020147579 Kushner et al. Oct 2002 A1
20030023444 St. John Jan 2003 A1
20030028380 Freeland Feb 2003 A1
20030033152 Cameron Feb 2003 A1
20030046083 Devinney et al. Mar 2003 A1
20030061039 Levin Mar 2003 A1
20030065427 Funk Apr 2003 A1
20030068154 Zylka Apr 2003 A1
20030073434 Shostak Apr 2003 A1
20030167171 Calderone et al. Sep 2003 A1
20030187646 Smyers et al. Oct 2003 A1
20040064839 Watkins Apr 2004 A1
20040077334 Joyce Apr 2004 A1
20040110472 Witkowski Jun 2004 A1
20040127241 Shostak Jul 2004 A1
20040132433 Stern Jul 2004 A1
20050143139 Park Jun 2005 A1
20050144251 Slate Jun 2005 A1
20050170863 Shostak Aug 2005 A1
20060018440 Watkins Jan 2006 A1
20060050686 Velez-Rivera Mar 2006 A1
20060085521 Sztybel Apr 2006 A1
20060206339 Silvera Sep 2006 A1
20060206340 Silvera Sep 2006 A1
Foreign Referenced Citations (37)
Number Date Country
0576711 May 1994 EP
0621713 Oct 1994 EP
0782337 Jul 1997 EP
0285330 May 1998 EP
0895396 Feb 1999 EP
1094406 Apr 2001 EP
1122902 Aug 2001 EP
1341363 Sep 2003 EP
1003018 May 2005 EP
1633150 Mar 2006 EP
1633151 Mar 2006 EP
1742437 Jan 2007 EP
11-98022 Apr 1999 JP
9826595 Jun 1998 WO
9930501 Jun 1999 WO
0003542 Jan 2000 WO
0016568 Mar 2000 WO
0024198 Apr 2000 WO
0021232 Apr 2000 WO
0122112 Mar 2001 WO
0122249 Mar 2001 WO
0122633 Mar 2001 WO
0122712 Mar 2001 WO
0122713 Mar 2001 WO
0139178 May 2001 WO
0157851 Aug 2001 WO
0207050 Jan 2002 WO
0211120 Feb 2002 WO
0217090 Feb 2002 WO
0223900 Mar 2002 WO
02097590 Dec 2002 WO
2004021149 Mar 2004 WO
2004077721 Sep 2004 WO
2005079254 Sep 2005 WO
2006029269 Mar 2006 WO
2006033841 Mar 2006 WO
2006098789 Sep 2006 WO
Related Publications (1)
Number Date Country
20050033581 A1 Feb 2005 US
Provisional Applications (1)
Number Date Country
60504171 Sep 2003 US
Continuation in Parts (1)
Number Date Country
Parent 09785375 Feb 2001 US
Child 10943718 US