It is desirable in many situations to record communications, such as telephone calls. This is particularly so in a contact center, in which many agents may each handle hundreds of telephone calls every day. Recording these telephone calls can allow for quality assessment of agents, improvement of agent skills, and/or dispute resolution, for example.
In this regard, assessment of call quality is time-consuming and highly subjective. For instance, a telephone call may last from a few seconds to a few hours and may be only one part of a customer transaction, or may include several independent transactions. The demeanor of the caller is also influenced by events preceding the actual conversation, for example, the original reason for the call, the time spent waiting for the call to be answered, or the number of times the customer has had to call before getting through to the right person.
Assessing the “quality” of a telephone call is therefore difficult and subject to error, even when done by an experienced supervisor or full-time quality assessor. Typically, the assessment of a call is structured according to a pre-defined set of criteria and sub-criteria. Some of these may relate to the initial greeting, the assessment of the reason for the call, the handling of the core reason for the call, confirming that the caller is satisfied with the handling of the call, and leaving the call.
Automation of the assessment process through standardized forms and evaluation profiles has made such assessment more efficient, but it is still impractical to assess more than a tiny percentage of calls. Moreover, even with a structured evaluation form, different assessors will evaluate the same call differently, with quite a wide variation in scores.
In this regard, systems and methods for analyzing communication sessions using fragments are provided. An embodiment of a method comprises: delineating fragments of an audio component of a communication session, each of the fragments being attributable to a party of the communication session and representing a contiguous period of time during which that party was speaking; and automatically assessing quality of at least some of the fragments such that a quality assessment of the communication session is determined.
An embodiment of such a system comprises a communication analyzer operative to: delineate fragments of an audio component of a communication session, each of the fragments being attributable to a party of the communication session and representing a contiguous period of time during which that party was speaking; and automatically assess quality of at least some of the fragments such that a quality assessment of the communication session is determined.
Computer readable media also are provided that have computer programs stored thereon for performing computer executable methods. In this regard, an embodiment of such a method comprises: delineating fragments of an audio component of a communication session, each of the fragments being attributable to a party of the communication session and representing a contiguous period of time during which that party was speaking; and automatically assessing quality of at least some of the fragments such that a quality assessment of the communication session is determined.
Other systems, methods, features and/or advantages of this disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description and be within the scope of the present disclosure.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. While several embodiments are described in connection with these drawings, there is no intent to limit the disclosure to the embodiments disclosed herein.
Systems and methods for analyzing communication sessions using fragments are provided. In this regard, several exemplary embodiments will be described in which a recording of a telephone call is divided into more manageable fragments. By way of example, each of the fragments can be configured as contiguous speech of a party of the call. Specific behaviors can, therefore, be identified automatically, as each fragment can be assessed more easily and unambiguously than if the behaviors had to be identified from within an undivided call. By automating the assessment of call quality, a higher proportion of calls can be analyzed, and hence a higher proportion of problem behaviors, processes, and issues identified and addressed with less effort and cost than alternative manual strategies.
In this regard,
One should note that network 116 can include one or more different networks and/or types of networks. As a non-limiting example, communications network 116 can include a Wide Area Network (WAN), the Internet, and/or a Local Area Network (LAN). Additionally, the communication analyzer can receive information corresponding to the communication session directly or from one or more various components that are not illustrated.
In operation, the analyzer of
In some embodiments, the parties to a communication session are recorded separately. In other embodiments, a session can be recorded in stereo, with one channel for the customer and one for the agent.
A vox detection analyzer of a communication analyzer can be used to determine when each party is talking. Such an analyzer typically detects an audio level above a pre-determined threshold for a sustained period (the "vox turn-on time"). Absence of audio is then determined by the audio level remaining below a pre-determined level (which may differ from the first level) for a pre-determined time (which may differ from the "turn-on" time). Identifying audio presence on each of the two channels of a call recording results in a time series through the call that identifies who, if anyone, is talking at any given time.
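The threshold-and-hold behavior described above can be sketched as follows. The frame length, threshold levels, and hold times here are illustrative assumptions, not values taken from the disclosure, and the sketch marks a party as talking only once the turn-on period has been sustained (it does not retroactively mark the onset frames):

```python
def detect_vox(levels, frame_ms=10, on_threshold=0.1, off_threshold=0.05,
               on_hold_ms=50, off_hold_ms=200):
    """Return a per-frame True/False series: is the party talking?

    levels: per-frame audio levels (e.g., normalized RMS values).
    Turn-on requires the level to stay at or above on_threshold for
    on_hold_ms; turn-off requires it to stay below off_threshold
    (a possibly different level) for off_hold_ms (a possibly
    different time), giving the hysteresis described above.
    """
    on_frames = on_hold_ms // frame_ms      # sustained frames to turn on
    off_frames = off_hold_ms // frame_ms    # sustained frames to turn off
    talking = False
    run = 0          # length of the run confirming a pending state change
    series = []
    for level in levels:
        if not talking:
            run = run + 1 if level >= on_threshold else 0
            if run >= on_frames:
                talking, run = True, 0
        else:
            run = run + 1 if level < off_threshold else 0
            if run >= off_frames:
                talking, run = False, 0
        series.append(talking)
    return series
```

With the defaults, a 100 ms burst of speech followed by silence turns the detector on after 50 ms of sustained audio and off again after 200 ms of sustained silence.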
Once audio presence is determined, the call can be broken into “fragments” representing the period in which each party talks on the call. In this regard, a fragment can be delimited by one or more of the following:
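However the delimiters are chosen, the basic delineation of per-party fragments from the two-channel vox time series might be sketched as follows; the `agent`/`customer` channel names and the frame length are illustrative assumptions:

```python
def delineate_fragments(agent_vox, customer_vox, frame_ms=10):
    """Break a two-channel vox series into per-party fragments.

    Each fragment is (party, start_ms, end_ms): a contiguous run of
    frames during which that party was speaking. In this minimal
    sketch a fragment is delimited simply by the party starting and
    stopping talking.
    """
    fragments = []
    for party, series in (("agent", agent_vox), ("customer", customer_vox)):
        start = None
        for i, talking in enumerate(series):
            if talking and start is None:
                start = i                      # fragment opens
            elif not talking and start is not None:
                fragments.append((party, start * frame_ms, i * frame_ms))
                start = None                   # fragment closes
        if start is not None:                  # fragment runs to end of call
            fragments.append((party, start * frame_ms, len(series) * frame_ms))
    fragments.sort(key=lambda f: f[1])         # chronological order across parties
    return fragments
```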
A schematic representation of an exemplary communication session and corresponding call fragments is depicted in
Having broken a call into fragments, the system can analyze the sequence and duration of the fragments. By way of example, for each fragment, some embodiments can determine one or more of the following:
In some embodiments, statistics of the call can be deduced from the individual call fragment data. These may include one or more of:
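As one illustration (the particular statistics chosen here, such as per-party talk time and the longest silence, are assumptions for the sketch rather than a list taken from the disclosure), call-level statistics can be deduced from the fragment data like so:

```python
def call_statistics(fragments, call_end_ms):
    """Deduce illustrative per-call statistics from a list of
    (party, start_ms, end_ms) fragments sorted by start time."""
    stats = {"fragment_count": len(fragments),
             "talk_ms": {},          # total speaking time per party
             "longest_gap_ms": 0}    # longest span in which nobody spoke
    prev_end = 0
    for party, start, end in fragments:
        stats["talk_ms"][party] = stats["talk_ms"].get(party, 0) + (end - start)
        # A positive (start - prev_end) is a silence between fragments.
        stats["longest_gap_ms"] = max(stats["longest_gap_ms"], start - prev_end)
        prev_end = max(prev_end, end)
    # Trailing silence after the last fragment also counts as a gap.
    stats["longest_gap_ms"] = max(stats["longest_gap_ms"], call_end_ms - prev_end)
    return stats
```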
As mentioned above, a communication analyzer can automatically assess quality of a communication session by assessing quality of at least some of its fragments. In order to accomplish quality assessment, various techniques can be used. By way of example, fragment training can be used, in which manual scoring is applied to one or more fragments and then the system applies comparable scoring to fragments that are evaluated to be similar.
In this regard, in some embodiments, individual fragments or sequences of two or more successive fragments are presented to the user of the system, typically with a clear indication of which party is speaking and the delay between the two fragments. The user listens to some or all of the fragments and then indicates, such as via a form on a screen provided by a scoring analyzer, whether the fragments relate to a good, bad, or "indifferent" interaction, for example. In many cases, the isolated fragments will not indicate a particularly good or bad experience, but in a small percentage of cases such fragments can indicate a particularly good or bad experience. By way of example, a long delay between two successive fragments can be considered "bad"; in other cases, the words uttered, or the tone or volume of an utterance, may indicate a good or bad experience. This manual (human) assessment of the quality of the fragment sequence can be stored and used to drive machine learning algorithms.
In some embodiments, in contrast to a scoring of good, bad or indifferent, a continuous scale (e.g., 0-10 rating) can be used. Additionally, multiple criteria may be presented, each of which the user can choose to provide feedback on, such as “Customer empathy” and “Persuasiveness” for example. In many cases, any particular fragment or fragment pair will not be particularly good or bad but as long as those cases that are at one extreme or the other are identified, the system will receive valuable input.
In many cases, however, the fragments presented to the user may not show anything significant but may indicate that the previous or next fragments may provide more valuable input. Because of this, the user may be presented with controls that allow the user to play the previous and/or next fragment. Thus, the user can provide feedback on those fragments and/or move on to the next or previous fragment.
Where users assess whole calls, the overall quality assessment of the call and the individual criteria/sub-criteria may be noted. These are then applied either to all fragments or, where specific criteria are explicitly linked to particular regions of the call (e.g., "Quality of Greeting", "Confirmation of resolution"), to the fragments of the call according to a weighting function. In those embodiments that use weighting, a different weighting can be applied to each fragment according to the distance of that fragment from the start of the call, the end of the call, or some other known point within the call. It should be noted that the point from which the fragment is measured for weighting purposes can be identified by an event that occurred during the call. The fragment can subsequently be stored with a timestamp linking the fragment to that point, e.g., event, in the call.
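The weighting step above might be sketched as follows. The disclosure only requires some weighting by distance from an anchor point; the exponential decay and its half-life used here are illustrative assumptions:

```python
def weighted_fragment_scores(fragments, criterion_score, anchor_ms,
                             half_life_ms=30000):
    """Spread a call-level criterion score over fragments, weighting
    each by the distance of its midpoint from an anchor point (e.g.,
    the call start, the call end, or a timestamped event in the call).

    A fragment at the anchor receives the full score; the weight
    halves for every half_life_ms of distance from the anchor.
    """
    scores = []
    for party, start, end in fragments:
        midpoint = (start + end) / 2
        weight = 0.5 ** (abs(midpoint - anchor_ms) / half_life_ms)
        scores.append((party, start, end, criterion_score * weight))
    return scores
```

For example, with the defaults, a "Quality of Greeting" score anchored at the call start would be applied almost fully to the opening fragments and only weakly to fragments several minutes in.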
As mentioned before, manual quality assessments can then be used by the system for enabling automated scoring of other fragments that have not been manually scored. Additionally or alternatively, some embodiments can be provided with a number of heuristics, such as predefined rules, that the system can use during automated analysis by a scoring analyzer. In this regard, such rules can involve one or more of the following:
The human input, e.g., predefined rules and/or examples of manually assessed calls/fragments, can be used as input for a variety of machine learning techniques, such as neural nets, Bayesian filters, and expert systems, for example. By identifying the characteristics of the call fragments that lead to the assessments given, a system employing such a technique can learn to identify the relevant characteristics that differentiate "good" from "bad" calls.
An example of this approach is a Bayesian probability assessment of the content of a call fragment. In such an approach, a transcript of a call may be processed and the frequency of the occurrence of each word within the customer's speech stored. The proportion of "good" fragments in which each word occurs and the proportion of "bad" fragments in which each word occurs are then noted. These probabilities can then be used to assess whether other fragments are likely to be "good" or "bad" based on the words within those fragments and the likelihood of each word being found in a "good" or "bad" fragment. From the many words within a given fragment, those that provide the strongest discrimination of good versus bad can be used and the remainder discarded. Of the N strongest indicators, an overall assessment can be made of good versus bad.
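A minimal sketch of this word-probability approach follows. The add-one smoothing and the log-odds scoring are assumed implementation choices; the disclosure describes only the per-word good/bad proportions and the use of the N strongest indicators:

```python
import math

def train_word_odds(good_transcripts, bad_transcripts):
    """For each word, estimate log(P(word | good) / P(word | bad)),
    where each probability is the (add-one smoothed) proportion of
    fragments of that class whose transcript contains the word."""
    def fraction_containing(word, transcripts):
        hits = sum(1 for t in transcripts if word in t.split())
        return (hits + 1) / (len(transcripts) + 2)   # smoothed proportion
    vocab = {w for t in good_transcripts + bad_transcripts for w in t.split()}
    return {w: math.log(fraction_containing(w, good_transcripts) /
                        fraction_containing(w, bad_transcripts))
            for w in vocab}

def score_fragment(transcript, word_odds, n_strongest=15):
    """Sum the N most discriminating log-odds present in a fragment;
    a positive total suggests 'good', a negative total 'bad'."""
    present = [word_odds[w] for w in set(transcript.split()) if w in word_odds]
    strongest = sorted(present, key=abs, reverse=True)[:n_strongest]
    return sum(strongest)
```

Words occurring equally often in both classes contribute log-odds near zero and are naturally crowded out of the N strongest indicators.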
Typically, the other attributes of a fragment, such as those described above, can be used as potential indicators of the good/bad decision. These inputs may be provided to train a neural network or other machine learning system.
In some embodiments, feedback can be used to further enhance analysis. Specifically, since a high proportion of fragment sequences do not indicate particularly good (or bad) experiences, it can be beneficial if a system presents to a user those fragments that it has identified as good or bad. By presenting these fragments and showing the assessment (good or bad) that the system has determined, the user can confirm or correct the assessment. This input can then be fed back into the training algorithm, either reinforcing a correct assessment or helping to avoid repetition of a mistake.
In this regard,
In block 418, scores produced during automated analysis are presented to a user for review. By way of example, the scores can be presented to the user via a graphical user interface displayed on a display device. Then, in block 420, inputs from the user either confirming or correcting the scores are provided, with these inputs being used to update the analysis algorithm of the communication analyzer.
Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor may be a hardware device for executing software, particularly software stored in memory.
The memory can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, the memory may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory can have a distributed architecture, where various components are situated remote from one another but can be accessed by the processor. Additionally, the memory includes an operating system 510, as well as instructions associated with a speech recognition engine 512, a phonetic analyzer 514, a vox detection analyzer 516, and a scoring analyzer 518, exemplary embodiments of each of which are described above.
It should be noted that embodiments of one or more of the systems described herein could be used to perform an aspect of speech analytics (i.e., the analysis of recorded speech or real-time speech), which can be used to perform a variety of functions, such as automated call evaluation, call scoring, quality monitoring, quality assessment, and compliance/adherence. By way of example, speech analytics can be used to compare a recorded interaction to a script (e.g., a script that the agent was to use during the interaction). In other words, speech analytics can be used to measure how well agents adhere to scripts, and to identify which agents are "good" sales people and which ones need additional training. As such, speech analytics can be used to find agents who do not adhere to scripts. In yet another example, speech analytics can measure script effectiveness, identify which scripts are effective and which are not, and find, for example, the section of a script that displeases or upsets customers (e.g., based on emotion detection). As another example, compliance with various policies can be determined. Such may be the case in, for example, the collections industry, a highly regulated business in which agents must abide by many rules. The speech analytics of the present disclosure may identify when agents are not adhering to their scripts and guidelines. This can potentially improve collection effectiveness and reduce corporate liability and risk.
In this regard, various types of recording components can be used to facilitate speech analytics. Specifically, such recording components can perform one or more various functions such as receiving, capturing, intercepting and tapping of data. This can involve the use of active and/or passive recording techniques, as well as the recording of voice and/or screen data.
It should be noted that speech analytics can be used in conjunction with such screen data (e.g., screen data captured from an agent's workstation/PC) for evaluation, scoring, analysis, adherence and compliance purposes, for example. Such integrated functionalities improve the effectiveness and efficiency of, for example, quality assurance programs. For example, the integrated function can help companies to locate appropriate calls (and related screen interactions) for quality monitoring and evaluation. This type of “precision” monitoring improves the effectiveness and productivity of quality assurance programs.
Another aspect that can be accomplished involves fraud detection. In this regard, various manners can be used to determine the identity of a particular speaker. In some embodiments, speech analytics can be used independently and/or in combination with other techniques for performing fraud detection. Specifically, some embodiments can involve identification of a speaker (e.g., a customer) and correlating this identification with other information to determine whether, for example, a fraudulent claim is being made. If such potential fraud is identified, some embodiments can provide an alert. For example, the speech analytics of the present disclosure may identify the emotions of callers. The identified emotions can be used in conjunction with identifying specific concepts to help companies spot either agents or callers/customers who are involved in fraudulent activities. Referring back to the collections example outlined above, by using emotion and concept detection, companies can identify which customers are attempting to mislead collectors into believing that they are going to pay. The earlier the company is aware of a problem account, the more recourse options it will have. Thus, the speech analytics of the present disclosure can function as an early warning system to reduce losses.
Additionally, included in this disclosure are embodiments of integrated workforce optimization platforms, as discussed in U.S. application Ser. No. 11/359,356, filed on Feb. 22, 2006, entitled “Systems and Methods for Workforce Optimization,” which is hereby incorporated by reference in its entirety. At least one embodiment of an integrated workforce optimization platform integrates: (1) Quality Monitoring/Call Recording—voice of the customer; the complete customer experience across multimedia touch points; (2) Workforce Management—strategic forecasting and scheduling that drives efficiency and adherence, aids in planning, and helps facilitate optimum staffing and service levels; (3) Performance Management—key performance indicators (KPIs) and scorecards that analyze and help identify synergies, opportunities and improvement areas; (4) e-Learning—training, new information and protocol disseminated to staff, leveraging best practice customer interactions and delivering learning to support development; and/or (5) Analytics—deliver insights from customer interactions to drive business performance. By way of example, the integrated workforce optimization process and system can include planning and establishing goals—from both an enterprise and center perspective—to ensure alignment and objectives that complement and support one another. Such planning may be complemented with forecasting and scheduling of the workforce to ensure optimum service levels. Recording and measuring performance may also be utilized, leveraging quality monitoring/call recording to assess service quality and the customer experience.
One should note that the flowcharts included herein show the architecture, functionality, and/or operation of a possible implementation of software. In this regard, each block can be interpreted to represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
One should note that any of the programs listed herein, which can include an ordered listing of executable instructions for implementing logical functions (such as depicted in the flowcharts), can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer-readable medium could include an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of certain embodiments of this disclosure can include embodying the functionality described herein in logic embodied in hardware or software-configured mediums.
It should be emphasized that the above-described embodiments are merely possible examples of implementations. Many variations and modifications may be made to the above-described embodiments. All such modifications and variations are intended to be included herein within the scope of this disclosure.
Number | Date | Country | |
---|---|---|---|
20080080385 A1 | Apr 2008 | US |