The present invention relates to a robust and highly reliable system that allows users to browse web sites and retrieve information by using conversational voice commands. Additionally, the present invention allows users to control and monitor other systems and devices that are connected to the Internet or any other network by using voice commands.
Currently, three options exist for a user who wishes to gather information from a web site accessible over the Internet. The first option is to use a desktop or a laptop computer connected to a telephone line via a modem or connected to a network with Internet access. The second option is to use a Personal Digital Assistant (PDA) that has the capability of connecting to the Internet either through a modem or a wireless connection. The third option is to use one of the newly designed web-phones or web-pagers that are now being offered on the market. Although each of these options provides a user with access to the Internet to browse web sites, each has its own drawbacks.
Desktop computers are very large and bulky and are difficult to transport. Laptop computers solve this inconvenience, but many are still quite heavy and are inconvenient to carry. Further, laptop computers cannot be carried and used everywhere a user travels. For instance, if a user wishes to obtain information from a remote location where no electricity or communication lines are installed, it would be nearly impossible to use a laptop computer. Oftentimes, information is needed on an immediate basis where a computer is not accessible. Furthermore, the use of laptop or desktop computers to access the Internet requires either a direct or a dial-up connection to an Internet Service Provider (ISP). Oftentimes, such connections are not available when a user desires to connect to the Internet to acquire information.
The second option for remotely accessing web sites is the use of PDAs. These devices also have their own set of drawbacks. First, PDAs with the ability to connect to the Internet and access web sites are not readily available. As a result, these PDAs tend to be very expensive. Furthermore, users are usually required to pay a special service fee to enable the web browsing feature of the PDA. A further disadvantage of these PDAs is that web sites must be specifically designed to allow these devices to access information on the web site. Therefore, a limited number of web sites are available that are accessible by these web-enabled PDAs. Finally, it is very common today for users to carry cell phones; however, users must also carry a separate PDA if they require the ability to gather information from various web sites. Users are therefore subjected to added expenses since they must pay for both cellular telephone service and also for the web-enabling service for the PDA. This results in a very expensive alternative for the consumer.
The third alternative mentioned above is the use of web-phones or web-pagers. These devices suffer many of the same drawbacks as PDAs. First, these devices are expensive to purchase. Further, the number of web sites accessible to these devices is limited since web sites must be specifically designed to allow access by these devices. Furthermore, users are often required to pay an additional fee in order to gain wireless web access. Again, this service is expensive. Another drawback of these web-phones or web-pagers is that as technology develops, the methods used by the various web sites to allow access by these devices may change. These changes may require users to purchase new web-phones or web-pagers or have the current device serviced in order to upgrade the firmware or operating system stored within the device. At the least, this would be inconvenient to users and may actually be quite expensive.
Therefore, a need exists for a system that allows users to easily access and browse the Internet from any location. Such a system would only require users to have access to any type of telephone and would not require users to subscribe to multiple services.
In the rapidly changing area of Internet applications, web sites change frequently. The design of the web site may change, the information required by the web site in order to perform searches may change, and the method of reporting search results may change. Web browsing applications that submit search requests and interpret responses from these web sites based upon expected formats will produce errors and useless responses when such changes occur. Therefore, a need exists for a system that can detect modifications to web sites and adapt to such changes in order to quickly and accurately provide the information requested by a user through a voice enabled device, such as a telephone.
When users access web sites using devices such as personal computers, delays in receiving responses are tolerated and even expected; however, such delays are not expected when a user communicates over a telephone. Users expect communications over a telephone to occur immediately with a minimal amount of delay time. A user attempting to find information using a telephone expects immediate responses to his search requests. A system that introduces too much delay between the time a user makes a request and the time of response will not be tolerated by users and will lose its usefulness. Therefore, it is important that a voice browsing system that uses telephonic communications selects web sites that provide rapid responses, since speed is an important factor for maintaining the system's desirability and usability. Therefore, a need exists for a system that accesses web sites based upon their speed of operation.
It is an object of an embodiment of the present invention to allow users to gather information from web sites by using voice enabled devices, such as wireline or wireless telephones.
An additional object of an embodiment of the present invention is to provide a system and method that allows the searching and retrieving of publicly available information by controlling a web browsing server using naturally spoken voice commands.
It is an object of another embodiment of the present invention to provide a robust voice browsing system that can obtain the same information from several web sites based upon a ranking order. The ranking order is automatically adjusted if the system detects that a given web site is not functioning, is too slow, or has been modified in such a way that the requested information cannot be retrieved any longer.
A still further object of an embodiment of the present invention is to allow users to gather information from web sites from any location where a telephonic connection can be made.
Another object of an embodiment of the present invention is to allow users to browse web sites on the Internet using conversational voice commands spoken into wireless or wireline telephones or other voice enabled devices.
An additional object of an embodiment of the present invention is to provide a system and method for using voice commands to control and monitor devices connected to a network.
It is an object of an embodiment of the present invention to provide a system and method which allows devices connected to a network to be controlled by conversational voice commands spoken into any voice enabled device interconnected with the same network.
The present invention relates to a system for acquiring information from sources on a network, such as the Internet. A voice browsing system maintains a database containing a list of information sources, such as web sites, connected to a network. Each of the information sources is assigned a rank number which is listed in the database along with the record for the information source. In response to a speech command received from a user, a network interface system accesses the information source with the highest rank number in order to retrieve information requested by the user.
A preferred embodiment of the present invention allows users to access and browse web sites when they do not have access to computers with Internet access. This is accomplished by providing a voice browsing system and method that allows users to browse web sites using conversational voice commands spoken into any type of voice enabled device (i.e., any type of wireline or wireless telephone, IP phone, wireless PDA, or other wireless device). These spoken commands are then converted into data messages by a speech recognition software engine running on a user interface system. These data messages are then sent to and processed by a network interface system. This network interface system then generates the proper requests that are transmitted to the desired web site over the Internet. Responses sent from the web site are received and processed by the network interface system and then converted into an audio message via a speech synthesis engine or a pre-recorded audio concatenation application and finally transmitted to the user's voice enabled device.
A preferred embodiment of the voice browser system and method uses a web site polling and ranking methodology that allows the system to detect changes in web sites and adapt to those changes in real-time. This enables the voice browser system of a preferred embodiment to deliver highly reliable information to users over any voice enabled device. This ranking system also enables the present invention to provide rapid responses to user requests. Long delays before receiving responses to requests are not tolerated by users of voice-based systems, such as telephones. When a user speaks into a telephone, an almost immediate response is expected. This expectation does not exist for non-voice communications, such as email transmissions or accessing a web site using a personal computer. In such situations, a reasonable amount of transmission delay is acceptable. The ranking system implemented by a preferred embodiment of the present invention ensures users will always receive the fastest possible response to their request.
An alternative embodiment of the present invention allows users to control and monitor the operation of a variety of household devices connected to a network using speech commands spoken into a voice enabled device.
A first embodiment of the present invention is a system and method for allowing users to browse information sources, such as web sites, by using naturally spoken, conversational voice commands spoken into a voice enabled device. Users are not required to learn a special language or command set in order to communicate with the voice browsing system of the present invention. Common and ordinary commands and phrases are all that is required for a user to operate the voice browsing system. The voice browsing system recognizes naturally spoken voice commands and is speaker-independent; it does not have to be trained to recognize the voice patterns of each individual user. Such speech recognition systems use phonemes to recognize spoken words and not predefined voice patterns.
The first embodiment allows users to select from various categories of information and to search those categories for desired data by using conversational voice commands. The voice browsing system of the first preferred embodiment includes a user interface system referred to as a media server. The media server contains a speech recognition software engine. This speech recognition engine is used to recognize natural, conversational voice commands spoken by the user and converts them into data messages based on the available recognition grammar. These data messages are then sent to a network interface system. In the first preferred embodiment, the network interface system is referred to as a web browsing server. The web browsing server then accesses the appropriate information source, such as a web site, to gather information requested by the user.
Responses received from the information sources are then transferred to the media server, where a speech synthesis engine converts the responses into audio messages that are transmitted to the user. A more detailed description of this embodiment will now be provided.
Referring to
Table 1 below depicts two database records 200 that are used with the preferred embodiment. These records also contain a field indicating the “category” of the record, which is “weather” in each of these examples.
The database also contains a listing of pre-recorded audio files used to create concatenated phrases and sentences. Further, database 100 may contain customer profile information, system activity reports, and any other data or software necessary for the testing or administration of the voice browsing system.
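For illustration only, two such records might be represented as follows; the field names and values are assumed and are not taken from Table 1, although they mirror the fields discussed in this description (category, rank number 202, URL 204, and content extraction agent command 206):

```python
# Hypothetical sketch of two web site records 200 in the "weather" category.
# Field names and values are assumed for illustration; the actual layout is
# given by Table 1 and the referenced figures.
weather_records = [
    {
        "category": "weather",
        "rank": 10,                        # rank number 202, adjusted by polling
        "url": "http://www.cnn.com",       # URL field 204
        "extraction_agent_cmd": "content_extraction_agent weather_cnn.desc",    # command 206
    },
    {
        "category": "weather",
        "rank": 7,
        "url": "http://www.lycos.com",
        "extraction_agent_cmd": "content_extraction_agent weather_lycos.desc",
    },
]
```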
The operation of the media servers 106 will now be discussed in relation to
The speech recognition function is performed by a speech recognition engine 300 that converts voice commands received from the user's voice enabled device 112 (i.e., any type of wireline or wireless telephone, Internet Protocol (IP) phone, or other special wireless unit) into data messages. In the preferred embodiment, voice commands and audio messages are transmitted using the PSTN 116 and data is transmitted using the TCP/IP communications protocol. However, one skilled in the art would recognize that other transmission protocols may be used for either voice or data. Other possible transmission protocols would include SIP/VoIP (Session Initiation Protocol/Voice over IP), Asynchronous Transfer Mode (ATM) and Frame Relay. A preferred speech recognition engine is developed by Nuance Communications of 1380 Willow Road, Menlo Park, Calif. 94025 (www.nuance.com). The Nuance engine capacity is measured in recognition units based on CPU type as defined in the vendor specification. The natural speech recognition grammars (i.e., what a user can say that will be recognized by the speech recognition engine) were developed by Webley Systems.
Table 2 below provides a partial source code listing of the recognition grammars used by the speech recognition engine of the preferred embodiment for obtaining weather information.
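The grammar listing itself appears in Table 2. Purely as a hypothetical sketch of the idea, and not a reproduction of that listing, a recognition grammar can be thought of as a set of phrase patterns that map a spoken weather request onto a category keyword and a city slot; the patterns and names below are assumed:

```python
import re

# Hypothetical, heavily simplified stand-in for a recognition grammar; the
# actual Nuance grammar in Table 2 is far richer, and the phrase patterns and
# slot names below are assumed for illustration only.
WEATHER_PATTERNS = [
    r"what is the weather in (?P<city>.+)",
    r"what's the weather like in (?P<city>.+)",
    r"give me the weather for (?P<city>.+)",
]

def parse_weather_utterance(utterance):
    """Map a recognized utterance onto a (category, city) data message."""
    text = utterance.lower().strip(" ?.!")
    for pattern in WEATHER_PATTERNS:
        match = re.match(pattern, text)
        if match:
            return {"category": "weather", "city": match.group("city")}
    return None

print(parse_weather_utterance("What is the weather in Chicago?"))
# -> {'category': 'weather', 'city': 'chicago'}
```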
The media server 106 uses recognition results generated by the speech recognition engine 300 to retrieve a web site record 200 stored in the database 100 that can provide the information requested by the user. The media server 106 processes the recognition result data identifying keywords that are used to search the web site records 200 contained in the database 100. For instance, if the user's request was “What is the weather in Chicago?”, the keywords “weather” and “Chicago” would be recognized. A web site record 200 with the highest rank number from the “weather” category within the database 100 would then be selected and transmitted to the web browsing server 102 along with an identifier indicating that Chicago weather is being requested.
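A minimal sketch of this selection step follows; the function and field names are assumed and simply restate the behavior described above:

```python
# Minimal sketch of selecting the highest-ranked record in a category
# (field names and rank values follow the hypothetical records shown earlier).
def select_record(records, category):
    """Return the record with the highest rank number in the requested category."""
    candidates = [r for r in records if r["category"] == category]
    return max(candidates, key=lambda r: r["rank"]) if candidates else None

records = [
    {"category": "weather", "rank": 10, "url": "http://www.cnn.com"},
    {"category": "weather", "rank": 7, "url": "http://www.lycos.com"},
]
best = select_record(records, "weather")
# `best` (here the cnn.com entry) plus an identifier indicating that Chicago
# weather is requested would then be transmitted to the web browsing server 102.
```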
The media servers 106 also contain a speech synthesis engine 302 that converts the data retrieved by the web browsing servers 102 into audio messages that are transmitted to the user's voice enabled device 112. A preferred speech synthesis engine is developed by Lernout and Hauspie Speech Products, 52 Third Avenue, Burlington, Mass. 01803 (www.lhsl.com).
A further description of the web browsing server 102 will be provided in relation to
Upon receiving a web site record 200 from the database 100 in response to a user request, the web browsing server 102 invokes the “content extraction agent” command 206 contained in the record 200. The content extraction agent 400 allows the web browsing server 102 to properly format requests and read responses provided by the web site 114 identified in the URL field 204 of the web site record 200. Each content extraction agent command 206 invokes the content extraction agent and identifies a content description file associated with the web page identified by the URL 204. This content description file directs the extraction agent where to extract data from the accessed web page and how to format a response to the user utilizing that data. For example, the content description for a web page providing weather information would indicate where to insert the “city” name or ZIP code in order to retrieve Chicago weather information. Additionally, the content description file for each supported URL indicates the location on the web page where the response information is provided. The extraction agent 400 uses this information to properly extract from the web page the information requested by the user.
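As a hypothetical illustration of this concept, and not a reproduction of the descriptor files listed in Tables 5 and 6 below, a content description file for a weather page could be thought of as specifying a request template, a response location, and a reply format; all names, URLs, and patterns below are assumed:

```python
# Hypothetical content descriptor for a weather page; the keys, URL, and
# patterns are assumed and are not taken from the descriptor files in
# Tables 5 and 6.
weather_descriptor = {
    # How to format the request: the extraction agent fills in the {city}
    # placeholder (or a ZIP code) supplied with the user's request.
    "request_template": "http://www.example-weather-site.com/search?city={city}",
    # Where on the returned page the response information is located, expressed
    # here as a regular expression capturing the current conditions.
    "response_pattern": r"Current conditions:\s*(?P<conditions>[^<]+)",
    # How to phrase the extracted data back to the user before speech synthesis.
    "response_template": "The weather in {city} is {conditions}.",
}
```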
Table 3 below contains source code for a content extraction agent 400 used by the preferred embodiment.
Table 4 below contains source code of the content fetcher 402 used with the content extraction agent 400 to retrieve information from a web site.
Table 5 below contains the content descriptor file source code for obtaining weather information from the web site www.cnn.com that is used by the extraction agent 400 of the preferred embodiment.
Table 6 below contains the content descriptor file source code for obtaining weather information from the web site www.lycos.com that is used by the extraction agent 400 of the preferred embodiment.
Once the web browsing server 102 accesses the web site specified in the URL 204 and retrieves the requested information, the information is forwarded to the media server 106. The media server uses the speech synthesis engine 302 to create an audio message that is then transmitted to the user's voice enabled device 112. In the preferred embodiment, each web browsing server 102 is based upon Intel's Dual Pentium III 730 MHz microprocessor system.
Referring to
As an example, if a user wishes to obtain restaurant information, he may speak into his telephone the phrase “yellow pages”. The IVR application would then ask the user what he would like to find and the user may respond by stating “restaurants”. The user may then be provided with further options related to searching for the desired restaurant. For instance, the user may be provided with the following restaurant options, “Mexican Restaurants”, “Italian Restaurants”, or “American Restaurants”. The user then speaks into the telephone 112 the restaurant type of interest. The IVR application running on the media server 106 may also request additional information limiting the geographic scope of the restaurants to be reported to the user. For instance, the IVR application may ask the user to identify the zip code of the area where the restaurant should be located. The media server 106 uses the speech recognition engine 300 to interpret the speech commands received from the user. Based upon these commands, the media server 106 retrieves the appropriate web site record 200 from the database 100. This record and any additional data, which may include other necessary parameters needed to perform the user's request, are transmitted to a web browsing server 102. A firewall 104 may be provided that separates the web browsing server 102 from the database 100 and media server 106. The firewall provides protection to the media server and database by preventing unauthorized access in the event the firewall 108 for the web browsing server fails or is compromised. Any type of firewall protection technique commonly known to one skilled in the art could be used, including packet filter, proxy server, application gateway, or circuit-level gateway techniques.
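A rough sketch of this interactive flow follows; the prompts, menu structure, and names are assumed and merely restate the example above:

```python
# Hypothetical sketch of the IVR prompt flow described above; the prompts,
# menu structure, and function names are assumed for illustration only.
YELLOW_PAGES_MENU = {
    "prompt": "What would you like to find?",
    "restaurants": {
        "prompt": "Say Mexican Restaurants, Italian Restaurants, or American Restaurants.",
        "zip_prompt": "Please say the zip code where the restaurant should be located.",
    },
}

def run_yellow_pages_dialog(ask):
    """Walk the caller through the menu; `ask` plays a prompt and returns the
    caller's reply as recognized by the speech recognition engine 300."""
    choice = ask(YELLOW_PAGES_MENU["prompt"])          # e.g. "restaurants"
    node = YELLOW_PAGES_MENU.get(choice)
    if node is None:
        return None
    cuisine = ask(node["prompt"])                      # e.g. "Italian Restaurants"
    zip_code = ask(node["zip_prompt"])                 # e.g. "60601"
    # The collected parameters would be matched against a web site record 200
    # and forwarded to a web browsing server 102.
    return {"category": "yellow pages", "cuisine": cuisine, "zip": zip_code}
```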
The web browsing server 102 then uses the web site record and any additional data and executes the extraction agent 400 and relevant content descriptor file 406 to retrieve the requested information.
The information received from the responding web site 114 is then processed by the web browsing server 102 according to the content descriptor file 406 retrieved by the extraction agent. This processed response is then transmitted to the media server 106 for conversion into audio messages using either the speech synthesis software 302 or by selecting among prerecorded voice responses contained within the database 100.
As mentioned above, each web site record contains a rank number 202 as shown in
The web site ranking method and system of the present invention provides robustness to the voice browser system and enables it to adapt to changes that may occur as web sites evolve. For instance, the information required by a web site 114 to perform a search or the format of the reported response data may change. Without the ability to adequately monitor and detect these changes, a search requested by a user may provide an incomplete response, no response, or an error. Such useless responses may result from incomplete data being provided to the web site 114 or the web browsing server 102 being unable to recognize the response data messages received from the searched web site 114.
The robustness and reliability of the voice browsing system of the present invention are further improved by the addition of a polling mechanism. This polling mechanism continually polls or “pings” each of the sites listed in the database 100. During this polling function, a web browsing server 102 sends brief data requests or “polling digital data” to each web site listed in database 100. The web browsing server 102 monitors the response received from each web site and determines whether it is a complete response and whether the response is in the expected format specified by the content descriptor file 406 used by the extraction agent 400. The polled web sites that provide complete responses in the format expected by the extraction agent 400 have their ranking established based on their “response time”. That is, web sites with faster response times will be assigned higher rankings than those with slower response times. If the web browsing server 102 receives no response from the polled web site or if the response received is not in the expected format, then the rank of that web site is lowered. Additionally, the web browsing server contains a warning mechanism that generates a warning message or alarm for the system administrator indicating that the specified web site has been modified or is not responsive and requires further review.
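A minimal sketch of this polling and re-ranking behavior follows; the helper names, timeout, and ranking formula are assumed, since the description above specifies only the qualitative behavior:

```python
import time
import urllib.request

# Minimal sketch of the polling and re-ranking idea; the helper names, timeout,
# and ranking formula are assumed (the text only states that complete, fast
# responders are ranked above slow, broken, or unresponsive ones).
def poll_and_rank(records, matches_expected_format, notify_admin):
    for record in records:
        try:
            start = time.monotonic()
            with urllib.request.urlopen(record["url"], timeout=5) as resp:
                body = resp.read()
            elapsed = time.monotonic() - start
        except Exception:
            body, elapsed = None, None

        if body is not None and matches_expected_format(record, body):
            # Complete response in the expected format: faster sites rank higher.
            record["rank"] = 1000.0 / (1.0 + elapsed)
        else:
            # No response, or a response the extraction agent would not recognize:
            # lower the rank and warn the system administrator.
            record["rank"] = record.get("rank", 0) / 2.0
            notify_admin("Web site %s is unresponsive or has been modified."
                         % record["url"])
    return sorted(records, key=lambda r: r["rank"], reverse=True)
```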
Since the web browsing servers 102 access web sites based upon their ranking number, only those web sites that produce useful and error-free responses will be used by the voice browser system to gather information requested by the user. Further, since the ranking numbers are also based upon the speed of a web site in providing responses, only the most time efficient sites are accessed. This system assures that users will get complete, timely, and relevant responses to their requests. Without this feature, users may be provided with information that is not relevant to their request or may not get any information at all. The constant polling and re-ranking of the web sites used within each category allows the voice browser of the present invention to operate efficiently. Finally, it allows the voice browser system of the present invention to dynamically adapt to changes in the rapidly evolving web sites that exist on the Internet.
It should be noted that the web sites accessible by the voice browser of the preferred embodiment may use any type of mark-up language, including Extensible Markup Language (XML), Wireless Markup Language (WML), Handheld Device Markup Language (HDML), Hyper Text Markup Language (HTML), or any variation of these languages.
A second embodiment of the present invention is depicted in
Each of these devices 500 is connected to a network 502. These devices 500 may contain embedded microprocessors or may be connected to other computer equipment that allows the device 500 to communicate with network 502. In the preferred embodiment, the devices 500 appear as “web sites” connected to the network 502. This allows a network interface system, such as a device browsing server 506, a database 508, and a user interface system, such as a media server 510, to operate similarly to the web browsing server 102, database 100, and media server 106 described in the first preferred embodiment above. A network 502 interfaces with one or more network interface systems, which are shown as device browsing servers 506 in
Database 508 lists all devices that are connected to the network 502. For each device 500, the database 508 contains a record similar to that shown in
The content extraction agent operates similarly to that described in the first embodiment. A device descriptor file contains a listing of the options and functions available for each of the devices 500 connected on the network 502. Furthermore, the device descriptor file contains the information necessary to properly communicate with the networked devices 500. Such information would include, for example, communication protocols, message formatting requirements, and required operating parameters.
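As a hypothetical illustration, a device descriptor file for the indoor lighting system discussed below might carry information along these lines; the keys and values are assumed:

```python
# Hypothetical device descriptor file content for an indoor lighting system;
# keys, protocol name, and parameter values are assumed for illustration.
indoor_lighting_descriptor = {
    "device": "Indoor Lighting System",
    "options": ["status", "turn on", "turn off", "dim"],   # functions offered to the user
    "protocol": "http",                                    # communication protocol
    "message_format": "room={room}&command={command}",     # message formatting requirements
    "operating_parameters": {"dim_levels": [25, 50, 75]},  # required operating parameters
}
```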
The device browsing server 506 receives messages from the various networked devices 500, appropriately formats those messages and transmits them to one or more media servers 510 which are part of the device browsing system. The user's voice enabled devices 504 can access the device browsing system by calling into a media server 510 via the Public Switched Telephone Network (PSTN) 512. In the preferred embodiment, the device browsing server is based upon Intel's Dual Pentium III 730 MHz microprocessor system.
The media servers 510 act as user interface systems and perform the functions of natural speech recognition, speech synthesis, data processing, and call handling. The media server 510 operates similarly to the media server 106 depicted in
First, a user may call into a media server 510 by dialing a telephone number associated with an established device browsing system. Once the user is connected, the IVR application of the media server 510 will provide the user with a list of available systems that may be monitored or controlled based upon information contained in database 508.
For example, the user may be provided with the option to select “Home Systems” or “Office Systems”. The user may then speak the command “access home systems”. The media server 510 would then access the database 508 and provide the user with a listing of the home subsystems or devices 500 available on the network 502 for the user to monitor and control. For instance, the user may be given a listing of subsystems such as “Outdoor Lighting System”, “Indoor Lighting System”, “Security System”, or “Heating and Air Conditioning System”. The user may then select the indoor lighting subsystem by speaking the command “Indoor Lighting System”. The IVR application would then provide the user with a set of options related to the indoor lighting system. For instance, the media server 510 may then provide a listing such as “Dining Room”, “Living Room”, “Kitchen”, or “Bedroom”. After selecting the desired room, the IVR application would provide the user with the options to hear the “status” of the lighting in that room or to “turn on”, “turn off”, or “dim” the lighting in the desired room. These commands are provided by the user by speaking the desired command into the user's voice enabled device 504. The media server 510 receives this command and translates it into a data message. This data message is then forwarded to the device browsing server 506 which routes the message to the appropriate device 500.
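A minimal sketch of this command routing follows; the data structures and helper names are assumed and follow the hypothetical device descriptor shown earlier:

```python
# Sketch of routing a translated voice command to the proper networked device;
# the data structures and helper names are assumed for illustration.
def route_command(data_message, device_records, send_to_device):
    """data_message example: {"device": "Indoor Lighting System",
                              "room": "Kitchen", "command": "turn off"}"""
    # Look up the device record and its descriptor in database 508.
    record = next((r for r in device_records
                   if r["device"] == data_message["device"]), None)
    if record is None:
        return "That device is not listed on the network."
    # Format the message as the device descriptor requires and forward it over
    # the network 502 via the device browsing server 506.
    payload = record["message_format"].format(room=data_message["room"],
                                              command=data_message["command"])
    return send_to_device(record, payload)
```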
The device browsing system 514 of this embodiment of the present invention also provides the same robustness and reliability features described in the first embodiment. The device browsing system 514 has the ability to detect whether new devices have been added to the system or whether current devices are out-of-service. This robustness is achieved by periodically polling or “pinging” all devices 500 listed in database 508. The device browsing server 506 periodically polls each device 500 and monitors the response. If the device browsing server 506 receives a recognized and expected response from the polled device, then the device is categorized as being recognized and in-service. However, if the device browsing server 506 does not receive a response from the polled device 500 or receives an unexpected response, then the device 500 is marked as being either new or out-of-service. A warning message or a report may then be generated for the user indicating that a new device has been detected or that an existing device is experiencing trouble.
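A minimal sketch of this device polling behavior follows, with helper names and status labels assumed:

```python
# Sketch of the periodic device polling described above; helper names and the
# status labels are assumed (the text describes the behavior, not the code).
def poll_devices(device_records, ping, is_expected_response, notify_user):
    for record in device_records:
        reply = ping(record)                      # brief polling request to the device
        if reply is not None and is_expected_response(record, reply):
            record["status"] = "in-service"
        else:
            # No reply, or a reply the device browsing server 506 does not
            # recognize: mark the device as new or out-of-service and warn the user.
            record["status"] = "new-or-out-of-service"
            notify_user("Device '%s' is new or not responding." % record["device"])
```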
Therefore, this embodiment allows users to remotely monitor and control any devices that are connected to a network, such as devices within a home or office. Furthermore, no special telecommunications equipment is required for users to remotely access the device browser system. Users may use any type of voice enabled device (i.e., wireline or wireless telephones, IP phones, or other wireless units) available to them. Furthermore, a user may perform these functions from anywhere without having to subscribe to additional services. Therefore, no additional expenses are incurred by the user.
The descriptions of the preferred embodiments described above are set forth for illustrative purposes and are not intended to limit the present invention in any manner. Equivalent approaches are intended to be included within the scope of the present invention. While the present invention has been described with reference to the particular embodiments illustrated, those skilled in the art will recognize that many changes and variations may be made thereto without departing from the spirit and scope of the present invention. These embodiments and obvious variations thereof are contemplated as falling within the scope and spirit of the claimed invention.
This application is a continuation of U.S. patent application Ser. No. 11/409,703, filed Apr. 24, 2006, now allowed, which is a continuation of U.S. patent application Ser. No. 10/821,690, filed Apr. 9, 2004 and issued as U.S. Pat. No. 7,076,431 on Jul. 11, 2006, which is a continuation of U.S. patent application Ser. No. 09/776,996, filed Feb. 5, 2001 and issued as U.S. Pat. No. 6,721,705 on Apr. 13, 2004, which claims the benefit of priority to U.S. Provisional Application No. 60/180,344, filed Feb. 4, 2000, entitled “Voice-Activated Information Retrieval System” and U.S. Provisional Application No. 60/233,068, filed Sep. 15, 2000, entitled “Robust Voice Browser System and Voice Activated Device Controller”, all of which are herein incorporated by reference in their entirety.