As communication technologies have improved, businesses and individuals have desired greater functionality in their communication networks. As a nonlimiting example, many businesses have created call center infrastructures, in which a customer or other user can call to receive information related to the business. As customers call into the call center, the customer may be connected with a customer service representative who can provide the desired information. Depending on the time of the call, the subject matter of the call, and/or other information, the customer may be connected with different customer service representatives. As such, depending on these and/or other factors, the customer may be provided with varying levels of quality with respect to the interaction with a customer service representative. Because most businesses desire to provide the highest possible quality of customer service, many businesses have turned to recording the communication between the customer and the customer service representative. While recording this data has proven beneficial in many cases, many businesses receive call volumes that inhibit the business from reviewing all of the call data received.
As such, many businesses have turned to speech recognition technology to capture the recorded communication data and thereby provide a textual document for review of the communication. While textual documentation of a communication has also proven beneficial, a similar scenario may exist, in that the sheer amount of data may be such that review of the data is impractical. To combat this problem, a number of businesses have also implemented speech analytics technologies to analyze the speech recognized communications. One such technology that has emerged includes large vocabulary continuous speech recognition (LVCSR). LVCSR technologies often convert the audio received from a communication into a textual document containing an English (or other spoken language) transcription of the communication. From the textual document, analytics may be provided to determine various data related to the communication.
While LVCSR technologies have improved the ability to analyze captured data, LVCSR technology often consumes a large amount of resources in converting the audio data into a textual format and/or analyzing the textual data. As such, phonetic speech to text technologies have also emerged. While phonetic speech to text technologies provide analytic functionality, many of the features that may be provided in an LVCSR type speech to text technology may be unavailable.
Thus, a heretofore unaddressed need exists in the industry to address the aforementioned deficiencies and inadequacies.
Included are embodiments for providing speech analysis. At least one embodiment of a method includes receiving audio data associated with a communication and providing at least one phoneme in a phonetic transcript, the phonetic transcript including at least one character from a phonetic alphabet.
Also included are embodiments of a system for providing speech analysis. At least one embodiment of a system includes an audio receiving component configured to receive audio data associated with a communication and a providing component configured to provide at least one phoneme in a phonetic transcript, the phonetic transcript including at least one character from a phonetic alphabet.
Also included are embodiments of a computer readable medium for providing speech analysis. At least one embodiment of a computer readable medium includes audio receiving logic configured to receive audio data associated with a communication and providing logic configured to provide at least one phoneme in a phonetic transcript, the phonetic transcript including at least one character from a phonetic alphabet.
Other systems, methods, features, and advantages of this disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description and be within the scope of the present disclosure.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. While several embodiments are described in connection with these drawings, there is no intent to limit the disclosure to the embodiment or embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.
Included are techniques for using a phonetic speech recognition engine to produce a phonetic transcript of a communication. The phonetic transcript may include information about the sounds that occurred in speech without attempting to reconstruct these “phonemes” into syllables and words. Additionally, at least one exemplary embodiment includes the phonetic transcript as an index file. The index file can be configured for easy searching of phonemes and combinations of phonemes.
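As a nonlimiting, hypothetical sketch only (the transcript representation, function names, and index structure below are assumptions introduced for illustration and are not part of any particular engine), such an index file might map each phoneme to the positions at which it occurs so that individual phonemes and combinations of phonemes can be located quickly:

    # Illustrative sketch: build a simple phoneme index and search it for a combination.
    from collections import defaultdict

    def build_phoneme_index(phonemes):
        """Map each phoneme to the list of positions at which it occurs."""
        index = defaultdict(list)
        for position, phoneme in enumerate(phonemes):
            index[phoneme].append(position)
        return index

    def find_combination(index, combination):
        """Return start positions where the phoneme combination occurs contiguously."""
        hits = []
        for start in index.get(combination[0], []):
            if all((start + i) in index.get(p, []) for i, p in enumerate(combination)):
                hits.append(start)
        return hits

    # Example usage with a short phoneme sequence.
    idx = build_phoneme_index(["b", "r", "aʊ", "n", "f", "ɒ", "k", "s"])
    print(find_combination(idx, ["f", "ɒ", "k", "s"]))  # -> [4]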
Regardless, this data can then be used as an input to an automated scoring system that can learn to spot patterns that identify various data related to the communication, including one or more components for determining quality of customer service. In at least one embodiment, the scoring system can make this determination based on samples associated with previous scores. Thus, processing of all calls in a call center with relatively little custom configuration can be achieved, as well as adaptation over time without requiring extensive reconfiguration.
One should note that a call center can include, but is not limited to, outsourced contact centers, outsourced customer relationship management, customer relationship management, voice of the customer, customer interaction, contact center, multi-media contact center, remote office, distributed enterprise, work-at-home agents, remote agents, branch office, back office, performance optimization, workforce optimization, hosted contact centers, and speech analytics, for example.
More specifically, recording and scoring calls (by various call-center-specific criteria) is a tool that may be used by call centers to monitor the quality of customer service. While, in at least one embodiment, scoring may largely be calculated manually, for some configurations, automated scoring may be desired. Automation can be performed by configuring a set of rules into a scoring system and applying the rules to calls automatically.
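As a nonlimiting, hypothetical illustration of such rule-based automation (the rule structure, field names, and point values below are assumptions chosen for illustration only), a set of configured rules might be applied to each call as follows:

    # Illustrative sketch: apply a configured set of scoring rules to a call record.
    def score_call(call, rules):
        """Sum the points of every rule whose predicate is satisfied by the call."""
        return sum(points for predicate, points in rules if predicate(call))

    # Example rules: a greeting event occurred, and the call lasted under ten minutes.
    rules = [
        (lambda c: "greeting" in c.get("events", []), 10),
        (lambda c: c.get("duration_seconds", 0) < 600, 5),
    ]
    print(score_call({"events": ["greeting"], "duration_seconds": 480}, rules))  # -> 15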
Additionally, some embodiments include an automatic learning component, which may be configured to search for and identify patterns that can then be used to score future calls. The automatic learning component may receive as much relevant data as possible about the call (e.g., telephony events, information from Computer Telephone Integration (CTI) logic, such as customer ID, and data such as key presses recorded from the agent's computing device and/or communications device).
Once the automatic learning component has analyzed enough calls to generate some useful patterns, the automatic learning component can apply the patterns to new calls for automatic scoring. Further, manually scored calls can be sent to the scoring engine to help the patterns adjust over time, or when scoring requirements change.
In at least one embodiment, a phonetic engine may be configured to preprocess (“ingest”) raw audio data and produce a summarized form of the audio data, which includes phonetic data. The raw data and/or the phonetic summary, however, may be impractical to use in an automatic learning system. Oftentimes the raw audio data and/or the phonetic summary may include too much data, including unwanted noise. Additionally, oftentimes, the phonetic summary may be created in a proprietary format, specific to a particular phonetic engine.
Included in this disclosure is a description of a phonetic transcript. A phonetic transcript is a simple text file containing a list of the individual speech sounds (phonemes) that occurred in a particular communication. One way to represent this data includes utilization of the International Phonetic Alphabet (IPA), which can be encoded for computer use using the ISO10646 standard (Unicode). As a nonlimiting example, a British pronunciation of: “the quick brown fox jumps over the lazy dog” may be represented as:
ðə kwɪk braʊn fɒks dʒʌmps əʊvə ðə leɪzi dɒɡ
An extended form of the phonetic transcript could add a time stamp in the recording to indicate a time at which one or more phonemes occur. Some embodiments may also include the ability to specify multiple possible phonemes for each actual phoneme, with confidence levels indicating how close a match there is between the phoneme in the recording and the phoneme as it would normally be expected to appear. One embodiment, among others, includes producing an XML file using a simple schema for this data, as illustrated below. These and other embodiments are described below with reference to the drawings.
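Purely as a hypothetical illustration of such an XML file (the element and attribute names below are assumptions made for illustration; the disclosure does not prescribe a particular schema), an extended phonetic transcript with time stamps, alternate phonemes, and confidence levels might be arranged along the following lines:

    <phoneticTranscript recording="example-call">
      <phoneme time="0.00" symbol="ð" confidence="0.92"/>
      <phoneme time="0.08" symbol="ə" confidence="0.88"/>
      <phoneme time="0.15" symbol="k" confidence="0.95">
        <alternate symbol="g" confidence="0.41"/>
      </phoneme>
    </phoneticTranscript>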
As discussed above, in some configurations, a recording of a communication between a customer and a customer service representative (agent) can be provided to determine the quality of customer service. Similarly, some embodiments may include a voice to text conversion of the communication. LVCSR speech recognition may be configured to create an English (and/or other spoken language) translated textual document associated with the communication. While an LVCSR speech recognized textual document may provide enhanced searching capabilities related to the communication, LVCSR technologies may be slow and resource intensive. Similarly, with many phonetic technologies for speech recognition, it may be difficult to utilize one or more search functions associated with the communication.
Additionally, while a user can send a communication request via communications device 104, some embodiments may provide that a user utilizing computing device 108 may initiate a communication to call center 106 via network 100. In such configurations, a user may be utilizing a soft phone and/or other communications logic provided for initiating and facilitating a communication.
Call center 106 may also include an analytic scorecard 220, a quality management (QM) evaluations component 222, an enterprise reporting component 224, and a speech and replay component 226. An agent 228 (such as a customer service representative) can utilize one or more of the components of call center 106 to facilitate a communication with a caller on communications device 104. Similarly, an analyst 230 can utilize one or more components of call center 106 to analyze the quality of the communications between the agent 228 and the caller associated with communications device 104. A supervisor 232 may also have access to components of call center 106 to oversee the agent 228 and/or the analyst 230 and their interactions with a caller on communications device 104.
Additionally, a recognition engine cluster 202 may be coupled to call center 106 either directly and/or via network 100. Recognition engine cluster 202 may include one or more servers that may provide speech recognition functionality to call center 106.
In operation, a communication between a caller on communications device 104 and an agent 228 via network 100 may first be received by a recorder subsystem component 204. Recorder subsystem component 204 may record the communications in an audio format. The recorded audio may then be sent to an extraction filtering component 206, which may be configured to extract the dialogue (e.g., remove noise and other unwanted sounds) from the recording. The recorded communication can then be sent to a speech processing framework component 208 for converting the recorded audio communication into a textual format. Conversion of the audio into a textual format may be facilitated by a recognition engine cluster 202; however, this is not a requirement. Regardless, conversion from the audio format to a textual format may be facilitated via LVCSR speech recognition technologies and/or phonetic speech recognition technologies, as discussed in more detail below.
Upon conversion from audio to a textual format, data related to the communication may be provided to advanced data analytics (pattern recognition) component 218. Advanced data analytics component 218 may be configured to provide analysis associated with the speech to text converted communication to determine the quality of customer service provided to the caller of communications device 104. Advanced data analytics component 218 may utilize atlas component 210 for facilitation of this analysis. More specifically, atlas component 210 may include a speech package component 212 that may be configured to analyze various patterns in the speech of the caller of communications device 104. Similarly, desktop event component 214 may be configured to analyze one or more actions that the user of communications device 104 takes on that device. More specifically, network 100 may facilitate communications in an IP network. As such, communications device 104 may facilitate both audio and/or data communications that may include audio, video, image, and/or other data. Additionally, advanced data analytics component 218 may utilize an interactions package 216 to determine various components of the interaction between agent 228 and the caller of communications device 104. Advanced data analytics component 218 may then make a determination based on predetermined criteria of the quality of call service provided by agent 228.
Advanced data analytics component 218 may then facilitate creation of an analytic scorecard 220 and provide enterprise reporting 224, as well as quality management evaluations 222 and speech and replay data 226. At least a portion of this data may be viewed by an agent 228, an analyst 230, and/or a supervisor 232. Additionally, as discussed in more detail below, an analyst 230 may further analyze the data to provide a basis for advanced data analytics component 218 to determine the quality of customer service.
The processor 382 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the computing device 104, a semiconductor based microprocessor (in the form of a microchip or chip set), a macroprocessor, or generally any device for executing software instructions.
The volatile and nonvolatile memory 384 can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, VRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CD-ROM, etc.). Moreover, the memory 384 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the volatile and nonvolatile memory 384 can also have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor 382.
The software in volatile and nonvolatile memory 384 may include one or more separate programs, each of which includes an ordered listing of executable instructions for implementing logical functions. In the example of
A system component embodied as software may also be construed as a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When constructed as a source program, the program is translated via a compiler, assembler, interpreter, or the like, which may or may not be included within the volatile and nonvolatile memory 384, so as to operate properly in connection with the Operating System 386.
The Input/Output devices that may be coupled to system I/O Interface(s) 396 may include input devices, for example but not limited to, a keyboard, mouse, scanner, microphone, camera, proximity device, etc. Further, the Input/Output devices may also include output devices, for example but not limited to, a printer, display, etc. Finally, the Input/Output devices may further include devices that communicate both as inputs and outputs, for instance but not limited to, a modulator/demodulator (modem for accessing another device, system, or network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc. Similarly, a network interface, which is coupled to local interface 392, can be configured to communicate with a communications network, such as the network from
If the computing device 108 is a personal computer, workstation, or the like, the software in the volatile and nonvolatile memory 384 may further include a basic input output system (BIOS) (omitted for simplicity). The BIOS is a set of software routines that initialize and test hardware at startup, start the Operating System 386, and support the transfer of data among the hardware devices. The BIOS is stored in ROM so that the BIOS can be executed when the computing device 108 is activated.
When the computing device 108 is in operation, the processor 382 can be configured to execute software stored within the volatile and nonvolatile memory 384, to communicate data to and from the volatile and nonvolatile memory 384, and to generally control operations of the computing device 108 pursuant to the software. Software in memory, in whole or in part, is read by the processor 382, perhaps buffered within the processor 382, and then executed. Additionally, one should note that while the above description is directed to a computing device 108, other devices (such as application server, capture control server, and central recording system) can also include the components described in
One should note that advanced data analytics component 218 can be configured with one or more of the components and/or logic described above with respect to computing device 108. Additionally, analytics component 218, communications device 104, computing device 108, and/or other components of call center 106 can include voice recognition logic, voice-to-text logic, text-to-voice logic, etc. (or any permutation thereof), as well as other components and/or logic for facilitating the functionality described herein. Additionally, in some exemplary embodiments, one or more of these components can include the functionality described with respect to analytics component 218.
More specifically, ingest audio component 408 can be configured to facilitate the creation of a phonetic transcript with one or more phonemes that occur in the communication. One embodiment of a representation of the one or more phonemes can include designations from the International Phonetic Alphabet (IPA), which may be encoded for computer use using the ISO10646 standard (Unicode). Ingest audio component 408 can then create the phonetic transcript 412.
Phonetic transcript 412 can then be sent to a search system 420, which is part of a search component 416. Search system 420 can also receive vocabulary and rules as designated by an analyst, such as analyst 230 from
As a nonlimiting example, referring to
The phonetic transcript can then be sent to a search component 416, which includes a search system 420. The search system 420 can utilize vocabulary and rules component 418, as well as receive the search terms 414. As indicated above, in this nonlimiting example, the search term “brown fox” can be a desired term to be found in a communication. The search system 420 can then search the phonetic transcript for phonemes related to the term “brown fox.” As the phonetic transcript may not include an English translation of the audio recording, vocabulary and rules component 418 may be configured to provide a correlation between the search term 414 (which may be provided in English) and the phonetic representation of the desired search terms, which may include one or more phonemes.
If phonemes associated with the term “brown fox” appear in the phonetic transcript 412, a signal and/or scorecard can be provided to an analyst 230 for determining the quality of customer service provided by agent 228. Additionally, some embodiments can be configured to provide information to analyst 230 in the event that phonemes associated with the term “brown fox” do not appear in the communication. Similarly, other search terms and/or search criteria may be utilized to provide data to analyst 230.
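As a further nonlimiting sketch (the vocabulary entries, transcript representation, and function names below are assumptions for illustration; they do not describe the format used by any particular search system), the correlation between an English search term such as “brown fox” and its phonetic representation might be applied as follows:

    # Illustrative sketch: translate a search term into phonemes, then scan a phonetic transcript.
    vocabulary = {
        "brown": ["b", "r", "aʊ", "n"],   # illustrative IPA renderings, not a normative lexicon
        "fox": ["f", "ɒ", "k", "s"],
    }

    def term_to_phonemes(term):
        """Translate a (possibly multi-word) search term into a phoneme sequence."""
        phonemes = []
        for word in term.split():
            phonemes.extend(vocabulary[word])
        return phonemes

    def contains_sequence(transcript, sequence):
        """Return True if the phoneme sequence occurs contiguously in the transcript."""
        n = len(sequence)
        return any(transcript[i:i + n] == sequence for i in range(len(transcript) - n + 1))

    transcript = ["ð", "ə", "b", "r", "aʊ", "n", "f", "ɒ", "k", "s"]
    print(contains_sequence(transcript, term_to_phonemes("brown fox")))  # -> True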
Similarly, ingestion component 510 receives raw data 502 (which may include an audio recording of at least a portion of the communication) and converts the raw data into a phonetic transcript, as discussed above. The phonetic transcript can be provided to automatic scoring system 508. Automatic scoring system 508 can be configured to determine scoring patterns from the analyst 230 by applying the phonetic transcript to the manual score. More specifically, automatic scoring system 508 can determine a technique used by analyst 230 in determining the manual score 506. Automatic scoring system 508 can then create a scoring patterns document 512 that can be sent to automatic scoring system 518.
Similarly, on the scoring side of
Upon ingesting the raw data, ingestion component 516 can send the phonetic transcript to automatic scoring system 518 (which may or may not be different from automatic scoring system 508). Automatic scoring system 518 can be configured to receive the phonetic transcript as well as scoring patterns 512. Automatic scoring system 518 can then determine a score for raw data 502 according to the scoring patterns 512. Automatic scoring system 518 can then apply the scoring patterns to create an automatic score 520 associated with the communication.
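As a minimal, hedged sketch of this learning-and-scoring flow (the trigram features, averaging, and overlap rule below are assumptions standing in for whatever statistical pattern recognition a given embodiment uses), scoring patterns might be derived from manually scored phonetic transcripts and then applied to new calls as follows:

    # Illustrative sketch: learn simple scoring patterns from manually scored phonetic
    # transcripts, then apply them to a new transcript to produce an automatic score.
    from collections import Counter

    def features(transcript, n=3):
        """Count phoneme n-grams (trigrams by default) in a phonetic transcript."""
        return Counter(tuple(transcript[i:i + n]) for i in range(len(transcript) - n + 1))

    def learn_patterns(scored_calls, n=3):
        """Average n-gram counts per manual score to form scoring patterns."""
        patterns = {}
        for score, transcripts in scored_calls.items():
            total = Counter()
            for transcript in transcripts:
                total.update(features(transcript, n))
            patterns[score] = {gram: count / len(transcripts) for gram, count in total.items()}
        return patterns

    def automatic_score(transcript, patterns, n=3):
        """Return the score whose pattern best overlaps the transcript's n-gram counts."""
        observed = features(transcript, n)
        return max(patterns, key=lambda s: sum(min(observed[g], w) for g, w in patterns[s].items()))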
One should note that while raw data 502 on the scoring side of
As discussed above, the textual phonetic transcript may be configured such that searching functionality may be performed. Similarly, depending on the particular embodiment, the textual phonetic transcript may be configured to determine unknown terms (e.g., phonemes) associated with the communication. More specifically, with the textual phonetic transcript, call center 106 may be configured to search the textual phonetic transcript to determine whether a phoneme, a word, and/or a phrase is repeated in one or more communications. Call center 106 may previously have been unaware of the phoneme, word, and/or phrase; however, upon detecting the phoneme, word, and/or phrase in one or more communications, call center 106 may provide information associated with the phoneme, word, and/or phrase to agent 228.
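A minimal, hypothetical sketch of such discovery, assuming each communication is already available as a phoneme sequence (the sequence length and repetition threshold below are arbitrary choices for illustration):

    # Illustrative sketch: find phoneme sequences that recur across multiple communications,
    # which may indicate a previously unknown term of interest.
    from collections import Counter

    def repeated_sequences(communications, length=4, min_calls=2):
        """Return phoneme sequences of the given length that occur in at least min_calls calls."""
        seen_in = Counter()
        for transcript in communications:
            grams = {tuple(transcript[i:i + length])
                     for i in range(len(transcript) - length + 1)}
            seen_in.update(grams)
        return [seq for seq, count in seen_in.items() if count >= min_calls]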
Call center 106 may receive a manual score associated with the present communication (block 938). Call center 106 may then compare an automatic score with the received manual score to determine whether the automatic score includes one or more errors (block 940). As discussed above, this statistical pattern recognition allows call center 106 not only to learn patterns associated with scoring a communication, but also to determine accuracy data associated with scoring of the current communication.
One should also note that the above description could also include a hybrid system for recognizing and indexing speech. More specifically, in at least one exemplary embodiment, LVCSR may be utilized for word spotting and for short word detection. The phonetic transcript may be utilized for general searching. Other embodiments are also considered.
It should be noted that speech analytics (i.e., the analysis of recorded speech or real-time speech) can be used to perform a variety of functions, such as automated call evaluation, call scoring, quality monitoring, quality assessment, and compliance/adherence. By way of example, speech analytics can be used to compare a recorded interaction to a script (e.g., a script that the agent was to use during the interaction). In other words, speech analytics can be used to measure how well agents adhere to scripts, and to identify which agents are “good” sales people and which ones need additional training. As such, speech analytics can be used to find agents who do not adhere to scripts. In yet another example, speech analytics can measure script effectiveness, identify which scripts are effective and which are not, and find, for example, the section of a script that displeases or upsets customers (e.g., based on emotion detection). As another example, compliance with various policies can be determined. Such may be the case in, for example, the collections industry, which is a highly regulated business in which agents must abide by many rules. The speech analytics of the present disclosure may identify when agents are not adhering to their scripts and guidelines. This improves collection effectiveness and reduces corporate liability and risk.
In this regard, various types of recording components can be used to facilitate speech analytics. Specifically, such recording components can perform one or more various functions such as receiving, capturing, intercepting and tapping of data. This can involve the use of active and/or passive recording techniques, as well as the recording of voice and/or screen data.
It should be noted that speech analytics can be used in conjunction with such screen data (e.g., screen data captured from an agent's workstation/PC) for evaluation, scoring, analysis, adherence and compliance purposes, for example. Such integrated functionalities improve the effectiveness and efficiency of, for example, quality assurance programs. For example, the integrated function can help companies to locate appropriate calls (and related screen interactions) for quality monitoring and evaluation. This type of “precision” monitoring improves the effectiveness and productivity of quality assurance programs.
Another aspect that can be accomplished involves fraud detection. In this regard, various manners can be used to determine the identity of a particular speaker. In some embodiments, speech analytics can be used independently and/or in combination with other techniques for performing fraud detection. Specifically, some embodiments can involve identification of a speaker (e.g., a customer) and correlating this identification with other information to determine whether, for example, a fraudulent claim is being made. If such potential fraud is identified, some embodiments can provide an alert. For example, the speech analytics of the present disclosure may identify the emotions of callers. The identified emotions can be used in conjunction with identifying specific concepts to help companies spot either agents or callers/customers who are involved in fraudulent activities. Referring back to the collections example outlined above, by using emotion and concept detection, companies can identify which customers are attempting to mislead collectors into believing that they are going to pay. The earlier the company is aware of a problem account, the more recourse options it will have. Thus, the speech analytics of the present disclosure can function as an early warning system to reduce losses.
Additionally, included in this disclosure are embodiments of integrated workforce optimization platforms, as discussed in U.S. application Ser. No. 11/359,356, filed on Feb. 22, 2006, entitled “Systems and Methods for Workforce Optimization,” which is hereby incorporated by reference in its entirety. At least one embodiment of an integrated workforce optimization platform integrates: (1) Quality Monitoring/Call Recording—voice of the customer; the complete customer experience across multimedia touch points; (2) Workforce Management—strategic forecasting and scheduling that drives efficiency and adherence, aids in planning, and helps facilitate optimum staffing and service levels; (3) Performance Management—key performance indicators (KPIs) and scorecards that analyze and help identify synergies, opportunities and improvement areas; (4) e-Learning—training, new information and protocol disseminated to staff, leveraging best practice customer interactions and delivering learning to support development; and/or (5) Analytics—deliver insights from customer interactions to drive business performance. By way of example, the integrated workforce optimization process and system can include planning and establishing goals—from both an enterprise and center perspective—to ensure alignment and objectives that complement and support one another. Such planning may be complemented with forecasting and scheduling of the workforce to ensure optimum service levels. Recording and measuring performance may also be utilized, leveraging quality monitoring/call recording to assess service quality and the customer experience.
The embodiments disclosed herein can be implemented in hardware, software, firmware, or a combination thereof. At least one embodiment disclosed herein is implemented in software and/or firmware that is stored in a memory and that is executed by a suitable instruction execution system. If implemented in hardware, as in alternative embodiments, the embodiments disclosed herein can be implemented with any or a combination of the following technologies: discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
One should note that the flowcharts included herein show the architecture, functionality, and operation of a possible implementation of software. In this regard, each block can be interpreted to represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted and/or not at all. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
One should note that any of the programs listed herein, which can include an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer-readable medium could include an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of certain embodiments of this disclosure can include embodying the functionality described in logic embodied in hardware or software-configured media.
One should also note that conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more particular embodiments or that one or more particular embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
It should be emphasized that the above-described embodiments are merely possible examples of implementations, merely set forth for a clear understanding of the principles of this disclosure. Many variations and modifications may be made to the above-described embodiments without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure.