Systems and methods for analyzing communication sessions

Information

  • Patent Grant
  • Patent Number
    7,885,813
  • Date Filed
    Friday, September 29, 2006
  • Date Issued
    Tuesday, February 8, 2011
Abstract
Systems and methods for analyzing communication sessions are provided. A representative method includes: recording the communication session; identifying those portions of the communication session not containing speech of at least one of the agent and the customer; and performing post-recording processing on the recording of the communication session based, at least in part, on whether the portions contain speech of at least one of the agent and the customer.
Description
TECHNICAL FIELD

The present disclosure generally relates to analysis of communication sessions.


DESCRIPTION OF THE RELATED ART

Contact centers are staffed by agents who are trained to interact with customers. Although capable of conducting these interactions using various media, the most common scenario involves voice communications using telephones. In this regard, when a customer contacts a contact center by phone, the call is typically provided to an automated call distributor (ACD) that is responsible for routing the call to an appropriate agent. Prior to an agent receiving the call, however, the call can be placed on hold by the ACD for a variety of reasons. By way of example, the ACD can enable an interactive voice response system (IVR) to query the user for information so that an appropriate queue for handling the call can be determined. As another example, the ACD can place the call on hold until an agent is available for handling the call. In such an on hold period, music (which is referred to as “music on hold”) and/or various announcements (which can be prerecorded or use synthetic human voices) can be provided to the customer.


For a number of reasons, such as compliance regulations, it is commonplace to record communication sessions. Notably, an entire call (including on hold periods) can be recorded. However, a significant portion of such a recording can be attributed to music on hold, announcements and/or IVR queries that do not tend to provide substantive information for analysis.


SUMMARY

In this regard, systems and methods for analyzing communication sessions are provided. An exemplary embodiment of such a system comprises a voice analysis system that is operative to receive information corresponding to a communication session and perform post-recording processing on the information. The voice analysis system is configured to exclude a portion of the information corresponding to the communication session, that is not attributable to speech of at least one party of the communication session, from post-recording processing.


An exemplary embodiment of a method for analyzing communication sessions comprises excluding a portion of the communication session, not attributable to at least one party of the communication session, from post-recording processing.


Another exemplary embodiment of a method for analyzing communication sessions comprises: recording the communication session; identifying those portions of the communication session not containing speech of at least one of the agent and the customer; and performing post-recording processing on the recording of the communication session based, at least in part, on whether the portions contain speech of at least one of the agent and the customer.


Other systems, methods, features and/or advantages will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features and/or advantages be included within this description and be protected by the accompanying claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.



FIG. 1 is a schematic diagram illustrating an embodiment of a system for analyzing communication sessions.



FIG. 2 is a flowchart depicting functionality (or method steps) associated with an embodiment of a system for analyzing communication sessions.



FIG. 3 is a schematic diagram illustrating another embodiment of a system for analyzing communication sessions.



FIG. 4 is a flowchart depicting functionality (or method steps) associated with an embodiment of a system for analyzing communication sessions.



FIG. 5 is a schematic diagram of an embodiment of a system for analyzing communication sessions that is implemented by a computer.





DETAILED DESCRIPTION

As will be described in detail herein with reference to several exemplary embodiments, systems and methods for analyzing communication sessions can potentially enhance post-recording processing of communication sessions. In this regard, it is known that compliance recording and/or recording of communication sessions for other purposes involves recording various types of information that are of relatively limited substantive use. By way of example, music, announcements and/or queries by IVR systems commonly are recorded. Such information can cause problems during post-recording processing in that it can impede accurate processing by speech recognition and phonetic analysis systems. Additionally, since such information affords relatively little substantive value, its inclusion tends to consume recording resources, i.e., the information takes up space in memory, thereby incurring cost without providing corresponding value.


Referring now to FIG. 1, FIG. 1 depicts an exemplary embodiment of a system for analyzing communication sessions that incorporates a voice analysis system 102. Voice analysis system 102 receives information corresponding to a communication session, such as a session occurring between a customer 104 and an agent 106 via a communication network 108. As a non-limiting example, communication network 108 can include a Wide Area Network (WAN), the Internet and/or a Local Area Network (LAN). In some embodiments, the voice analysis system can receive the information corresponding to the communication session from a data storage device, e.g., a hard drive, that is storing a recording of the communication session.



FIG. 2 depicts the functionality (or method) associated with an embodiment of a system for analyzing communications, such as the embodiment of FIG. 1. In this regard, the depicted functionality involves excluding a portion of a communication session from post-recording processing (block 202). That is, information that does not correspond to a voice component of a party to the communication session, e.g., the agent and the customer, can be excluded. Notably, various types of information, such as music, announcements and/or queries of an IVR system are not attributable to one of the parties. As such, these types of information can be excluded from post-recording processing (block 204), which can involve speech recognition and/or phonetic analysis.


In some embodiments, information that does not correspond to a voice component of any party to the communication session is deleted from the recording of the communication session. As another example, such information could be identified and any post-recording processing algorithms could ignore those portions, thereby enabling processing resources to be devoted to analyzing other portions of the recordings.
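As a minimal sketch of such selective processing, assuming hypothetical `Segment` records produced by an identification stage (the labels and field names here are illustrative, not part of the original disclosure):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds from the start of the recording
    end: float
    label: str     # e.g. "speech", "music", "announcement", "ivr"

def segments_for_processing(segments):
    """Keep only portions attributable to a party's speech, so that
    speech recognition / phonetic analysis resources are not spent on
    music on hold, announcements, or IVR audio."""
    return [s for s in segments if s.label == "speech"]

session = [
    Segment(0.0, 4.5, "ivr"),
    Segment(4.5, 30.0, "music"),
    Segment(30.0, 95.0, "speech"),
    Segment(95.0, 100.0, "announcement"),
]
to_process = segments_for_processing(session)  # only the 30.0-95.0 portion
```

Under the deletion variant, the filtered-out segments would also be removed from the stored recording; under the ignore variant, they would merely be skipped by the processing algorithms.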


As a further example, at least with respect to announcements and queries from IVR systems that involve pre-recorded or synthetic human voices (i.e., computer generated voices), information regarding those audio components can be provided to the post-recording processing algorithms so that analysis can be accomplished efficiently. In particular, if the processing system has knowledge of the actual words that are being spoken in those audio components, the processing algorithm can more quickly and accurately convert those audio components to transcript form (as in the case of speech recognition) or to phoneme sequences (as in the case of phonetic analysis).
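When the scripts of prerecorded or synthetic announcements are known in advance, recognition of those portions can reduce to matching against the known scripts. A rough sketch, assuming a hypothetical script list and using simple string similarity in place of a real recognizer:

```python
import difflib

# Hypothetical known announcement/IVR scripts; in practice these would
# come from the ACD/IVR configuration.
KNOWN_SCRIPTS = [
    "please enter your account number followed by the pound key",
    "your call is important to us please stay on the line",
]

def match_known_script(rough_transcript, threshold=0.8):
    """Return the exact known script that best matches a possibly
    error-laden transcript, or None if nothing matches closely."""
    best, best_ratio = None, 0.0
    for script in KNOWN_SCRIPTS:
        ratio = difflib.SequenceMatcher(None, rough_transcript, script).ratio()
        if ratio > best_ratio:
            best, best_ratio = script, ratio
    return best if best_ratio >= threshold else None

hit = match_known_script("please enter your acount number followed by the pound kee")
miss = match_known_script("completely unrelated utterance")
```

A matched portion can then be transcribed exactly from the known script rather than recognized from the audio.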



FIG. 3 depicts another exemplary embodiment of a system for analyzing communication sessions. In this regard, system 300 is implemented in a contact center environment that includes a voice analysis system 302. Voice analysis system 302 incorporates an identification system 304 and a post-recording processing system 306. The post-recording processing system incorporates a speech recognition system 310 and a phonetic analysis system 312.


The contact center also incorporates an automated call distributor (ACD) 314 that facilitates routing of a call between the customer and the agent. The communication session is recorded by a recording system 316 that is able to provide information corresponding to the communication session to the voice analysis system for analysis.


In operation, the voice analysis system receives information corresponding to a communication session that occurs between a customer 320 and an agent 322, with the session occurring via a communication network 324. Specifically, the ACD routes the call so that the customer and agent can interact and the recorder records the communication session.


With respect to the voice analysis system 302, the identification system 304 analyzes the communication session (e.g., from the recording) to determine whether post-recording processing should be conducted with respect to each of the recorded portions of the session. Based on the determinations, which can be performed in various manners (examples of which are described in detail later), processing can be performed by the post-recording processing system 306. By way of example, the embodiment of FIG. 3 includes both a speech recognition system and a phonetic analysis system that can be used either individually or in combination to process portions of the communication session.


Notably, the ACD 314 can be responsible for providing various announcements to the customer. In some embodiments, these announcements can be provided via synthetic human voices and/or recordings. It should be noted that other types of announcements can be present in recordings that are not provided by an ACD. By way of example, a telephone central office can introduce announcements that could be recorded. As another example, voice mail systems can provide announcements. The principles described herein relating to treatment of ACD announcements are equally applicable to such other forms of announcements regardless of the manner in which the announcements become associated with a recording.


Additionally or alternatively, the ACD can facilitate interaction of the customer with an IVR system that queries the customer for various information. Additionally or alternatively, the ACD can provide music on hold, such as when the call is queued awaiting pickup by an agent. It should be noted that other types of music can be present in recordings that are not provided by an ACD. By way of example, a customer could be speaking to an agent when music is being played in the background. The principles described herein relating to treatment of ACD music on hold are equally applicable to such other forms of music regardless of the manner in which the music becomes associated with a recording.



FIG. 4 is a flowchart depicting functionality of an embodiment of a system for analyzing communication sessions, such as the system depicted in FIG. 3. In this regard, the functionality (or method steps) may be construed as beginning at block 402, in which a communication session is recorded. In block 404, portions of the communication session are identified as containing music, announcements and/or IVR audio. Then, as depicted in block 406, a determination is made as to whether the music, announcements and/or IVR audio that were identified are to be deleted from the recording. If it is determined that the music, announcements and/or IVR audio are to be deleted, the process proceeds to block 408, in which deletion from the recording is performed, and then to block 410. If, however, it is determined that the music, announcements and/or IVR audio are not to be deleted, the process also proceeds to block 410.


In block 410, information regarding the presence of the music, announcements and/or IVR audio is used to influence post-recording processing of a communication session. By way of example, the corresponding portions of the recording can be designated or otherwise flagged with information indicating that music, announcements and/or IVR audio is present. Other manners in which such a post-recording process can be influenced will be described in greater detail later.


Thereafter, the process proceeds to block 412, in which post-recording processing is performed. In particular, such post-recording processing can include at least one of speech recognition and phonetic analysis.
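The delete-or-flag decision of blocks 406-410 might be sketched as follows (the segment dictionaries and the `skip_analysis` flag are illustrative assumptions, not part of the disclosure):

```python
def apply_retention_policy(segments, delete_nonspeech):
    """Blocks 406-410: either delete non-speech portions outright, or
    retain them but flag them so post-recording processing (block 412)
    can skip or specially handle them."""
    if delete_nonspeech:
        # Block 408: remove the flagged material from the recording.
        return [s for s in segments if s["label"] == "speech"]
    # Block 410: keep everything, but mark what processing should skip.
    for s in segments:
        s["skip_analysis"] = s["label"] != "speech"
    return segments

session = [{"label": "music"}, {"label": "speech"}, {"label": "ivr"}]
flagged = apply_retention_policy([dict(s) for s in session], delete_nonspeech=False)
trimmed = apply_retention_policy([dict(s) for s in session], delete_nonspeech=True)
```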


With respect to the identification of various portions of a communication session, a voice analysis system can be used to distinguish those portions of a communication session that include voice components of a party to the communication from other audio components. Depending upon the particular embodiment, such a voice analysis system could identify the voice components of the parties as being suitable for post-recording analysis and/or could identify other portions as not being suitable for post-recording analysis.


In some embodiments, a voice analysis system is configured to identify dual tone multi-frequency (DTMF) tones, i.e., the sounds generated by a touch tone phone. In some of these embodiments, the tones can be removed from the recording. In removing such tones prior to speech recognition and/or phonetic analysis, such analysis may be more effective as the DTMF tones may no longer mask some of the recorded speech.
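DTMF digits occupy standardized row/column frequency pairs, so they can be identified with a narrowband detector such as the Goertzel algorithm. A self-contained sketch (a fixed 8 kHz sampling rate and a clean single-digit tone are assumed):

```python
import math

DTMF_ROWS = [697, 770, 852, 941]       # Hz
DTMF_COLS = [1209, 1336, 1477, 1633]   # Hz
KEYS = [["1", "2", "3", "A"],
        ["4", "5", "6", "B"],
        ["7", "8", "9", "C"],
        ["*", "0", "#", "D"]]

def goertzel_power(samples, freq, rate):
    """Signal power near `freq`, computed with the Goertzel recurrence."""
    n = len(samples)
    k = round(n * freq / rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_dtmf(samples, rate=8000):
    """Pick the strongest row and column tone and map them to a key."""
    row = max(DTMF_ROWS, key=lambda f: goertzel_power(samples, f, rate))
    col = max(DTMF_COLS, key=lambda f: goertzel_power(samples, f, rate))
    return KEYS[DTMF_ROWS.index(row)][DTMF_COLS.index(col)]

rate = 8000
# Synthesize 100 ms of the digit "5" (770 Hz + 1336 Hz).
tone = [math.sin(2 * math.pi * 770 * t / rate) +
        math.sin(2 * math.pi * 1336 * t / rate) for t in range(800)]
digit = detect_dtmf(tone, rate)
```

Once a digit's samples are located this way, they can be zeroed or excised from the recording.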


As an additional benefit, the desire for improved security of personal information may require in some circumstances that such DTMF tones not be stored or otherwise made available for later access. For instance, a customer responding to an IVR system query may input DTMF tones corresponding to a social security number or a bank account number. Clearly, recording such tones could increase the likelihood of this information being compromised. However, an embodiment of a voice analysis system that deletes these tones does not incur this potential liability.


In some embodiments, signaling tones, such as distant and local ring tones and busy equipment signals, can be identified. With respect to the identification of ring tones, identification of regional tones can provide additional information about a call that may be useful. By way of example, such tones could identify the region to which an agent placed a call while a customer was on hold. Moreover, once identified, the signaling tones can be removed from the recording of the communication session.


Regional identification of audio components also can occur in some embodiments with respect to announcements. In this regard, some regions provide unique announcements, such as those originating from a central telephone office. For example, in the United States an announcement may be as follows, “I am sorry, all circuits are busy. Please try your call again later.” Identifying such an audio component in a recording could then inform a user that a party to the communication session attempted to place a call to the United States.


Various techniques can be used for differentiating the various portions of a communication session. In this regard, energy envelope analysis, which involves graphically displaying the amplitude of audio of a communication session, can be used to distinguish music from voice components. This is because music tends to follow established tempo patterns and oftentimes exhibits higher energy levels than voice components.
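A crude version of such energy-envelope discrimination, using an illustrative "silence ratio" heuristic (speech alternates bursts with pauses, whereas music on hold tends to sustain its envelope; the threshold values are assumptions):

```python
import math

def frame_energies(samples, frame=160):
    """Mean-square energy of consecutive fixed-length frames."""
    return [sum(x * x for x in samples[i:i + frame]) / frame
            for i in range(0, len(samples) - frame + 1, frame)]

def looks_like_music(samples, frame=160, silence_ratio=0.2):
    """Classify as music when few frames fall well below peak energy."""
    energies = frame_energies(samples, frame)
    peak = max(energies)
    quiet = sum(1 for e in energies if e < 0.1 * peak)
    return quiet / len(energies) < silence_ratio

# Sustained tone (music-like envelope) vs. gated tone (speech-like pauses).
steady = [math.sin(0.3 * i) for i in range(1600)]
gated = [math.sin(0.3 * i) if (i // 320) % 2 == 0 else 0.0 for i in range(1600)]
```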


In some embodiments, such identification can be accomplished manually, semi-automatically or automatically. By way of example, a semi-automatic mode of identification can include providing a user with a graphical user interface that depicts an energy envelope corresponding to a communication session. The graphical user interface could then provide the user with a sliding window that can be used to identify contiguous portions of the communication session. In this regard, the sliding window can be altered to surround a portion of the recording that is identified, such as by listening to that portion, as music. The portion of the communication session that has been identified within such a sliding window as being attributable to music can then be automatically compared by the system to other portions of the recorded communication session. When a suitable match is automatically identified, each such portion also can be designated as being attributable to music.
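The automatic comparison of a user-identified window against the rest of the recording might, for instance, use normalized correlation. This is a deliberately naive O(n·m) sketch over raw samples; a practical system would more likely compare spectral features:

```python
import math

def match_positions(recording, template, threshold=0.95):
    """Offsets where `template` correlates strongly with `recording`."""
    n = len(template)
    t_norm = math.sqrt(sum(x * x for x in template))
    hits = []
    for i in range(len(recording) - n + 1):
        window = recording[i:i + n]
        dot = sum(a * b for a, b in zip(window, template))
        w_norm = math.sqrt(sum(x * x for x in window))
        if w_norm and dot / (w_norm * t_norm) >= threshold:
            hits.append(i)
    return hits

# The snippet the user marked as music, repeated later in the recording.
template = [1.0, -2.0, 3.0, -1.0, 2.0]
recording = [0.0] * 40
recording[10:15] = template
recording[25:30] = template
hits = match_positions(recording, template)
```

Each matched offset can then be designated as attributable to music, as described above.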


Additionally or alternatively, some embodiments of a voice analyzer system can differentiate between announcements and tones that are regional in nature. This can be accomplished by comparing the recorded announcements and/or tones to a database of known announcements and tones to check for parity. Once designations are made about the portions of a communication session containing regional characteristics, the actual audio can be discarded or otherwise ignored during post-recording processing. In this manner, speech analysis need not be undertaken with respect to those portions of the audio, thereby allowing speech analysis systems to devote more time and resources to other portions of the communication session. Notably, however, the aforementioned designations can be retained in the records of the communication session so that information corresponding to the occurrence of such characteristics is not discarded.
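Such a parity check could amount to a lookup of a coarse fingerprint in a database of known regional tones and announcements. An illustrative toy version (the fingerprinting scheme and the database entry are invented for this sketch):

```python
# Hypothetical database: quantized energy cadence -> tone description.
KNOWN_TONES = {
    (3, 3, 0, 0, 3, 3, 0, 0): "US ringback (2 s on / 4 s off cadence)",
}

def fingerprint(frame_energies):
    """Quantize per-frame energies to a small alphabet (0-3)."""
    peak = max(frame_energies)
    return tuple(round(3 * e / peak) for e in frame_energies)

def identify_regional_audio(frame_energies):
    """Return a description of the matched tone, or None."""
    return KNOWN_TONES.get(fingerprint(frame_energies))

label = identify_regional_audio([0.9, 1.0, 0.0, 0.02, 0.95, 1.0, 0.01, 0.0])
```

On a match, only the designation (the description string) need be retained in the session records while the audio itself is discarded or ignored.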


In some embodiments, a database can be used for comparative purposes to identify variable announcements, that is, announcements that include established fields within which information can be changed. An example of such a variable announcement is an airline reservation announcement that indicates current rate promotions. Such an announcement usually includes a fixed field identifying the airline and then variable fields identifying a destination and a fare. Knowledge of the first variable field involving a destination could be used to simplify post-recording processing in some embodiments, whereas other embodiments may avoid processing of that portion once a determination is made that the portion corresponds to an announcement. Alternatively, a hybrid approach could involve not processing the audio corresponding to fixed fields while allowing post-recording processing of the audio corresponding to the variable fields.
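A variable announcement with fixed and variable fields could be represented as a template in which only the variable fields require analysis. For illustration (the airline name and field layout are hypothetical), as a regular expression over a normalized transcript:

```python
import re

# Fixed fields are literal text; variable fields are named groups.
TEMPLATE = re.compile(
    r"thank you for calling acme airlines "
    r"fly to (?P<destination>[a-z ]+) for only (?P<fare>\d+) dollars"
)

def parse_variable_announcement(transcript):
    """Extract the variable fields, or None if the template is absent."""
    m = TEMPLATE.fullmatch(transcript)
    return m.groupdict() if m else None

fields = parse_variable_announcement(
    "thank you for calling acme airlines fly to denver for only 99 dollars")
```

Under the hybrid approach, full recognition would run only on the audio aligned with the variable fields.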


Another form of variable announcement relates to voicemail systems. In this regard, voicemail systems use variable fields to inform a caller that a voice message can be recorded. In some embodiments, these announcements can be identified and handled as described above. One notable distinction, however, involves the use of the actual voicemail message that is left by a caller. If such a caller indicates that the message is “private,” some embodiments can delete the message or otherwise avoid post-recording processing of the message.



FIG. 5 is a schematic diagram illustrating an embodiment of a system for analyzing communication sessions that is implemented by a computer. Generally, in terms of hardware architecture, system 500 includes a processor 502, memory 504, and one or more input and/or output (I/O) device interface(s) 506 that are communicatively coupled via a local interface 508. The local interface 508 can include, for example but not limited to, one or more buses or other wired or wireless connections. The local interface may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers to enable communications.


Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor may be a hardware device for executing software, particularly software stored in memory.


The memory can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, the memory may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory can have a distributed architecture, where various components are situated remote from one another, but can be accessed by the processor. Additionally, the memory includes an operating system 510, as well as instructions associated with a voice analysis system 512, exemplary embodiments of which are described above.


One should note that the flowcharts included herein show the architecture, functionality and/or operation of a possible implementation of one or more embodiments that can be implemented in software and/or hardware. In this regard, each block can be interpreted to represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order depicted. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.


One should note that any of the functions (such as depicted in the flowcharts) can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer-readable medium could include an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of the certain embodiments of this disclosure can include embodying the functionality described in logic embodied in hardware or software-configured mediums.


It should be emphasized that many variations and modifications may be made to the above-described embodiments. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

Claims
  • 1. A method for analyzing communication sessions between an agent of a contact center and a customer, said method comprising: recording the communication session at recording system executing on a computing device; identifying, at an identification system, those portions of the communication session not containing speech of at least one of the agent and the customer; identifying a presence of at least one of an announcement and audio from an interactive voice response (IVR) system; performing post-recording processing comprises providing access to information corresponding to a database of potential announcements and potential audio from the IVR system such that the post-recording processing can analyze the at least one of the announcement and the audio using the database; and performing, at a computer-implemented post-processing system, post-recording processing on the recording of the communication session based, at least in part, on whether the portions contain speech of at least one of the agent and the customer.
  • 2. The method of claim 1, wherein: the method further comprises deleting the portions not attributable to at least one of the agent and the customer from the recording; performing post recording processing comprises performing post-recording processing on the remaining portions.
  • 3. The method of claim 1, wherein identifying comprises identifying presence of music in the communication session.
  • 4. The method of claim 1, further comprising deleting audio from the recording corresponding to a private voicemail message.
  • 5. A method for analyzing communication sessions comprising: recording the communication sessions at recording system executing on a computing device; identifying, at an identification system, a portion of the communication sessions not attributable to a voice component of at least one party of the communication session; and excluding the portion of the communication session, not attributable to a voice component of at least one party of the communication session, from post-recording processing, wherein the portion of the communication session comprises audio from an interactive voice response (IVR) system.
  • 6. The method of claim 5, wherein the post recording processing comprises speech recognition processing.
  • 7. The method of claim 5, wherein the post-recording processing comprises phonetic analysis.
  • 8. The method of claim 5, wherein the portion of the communication session comprises music.
  • 9. The method of claim 8, wherein the music comprises music on hold.
  • 10. The method of claim 8, wherein the portion of the communication session comprises an announcement.
  • 11. The method of claim 10, wherein the announcement comprises a synthetic human voice.
  • 12. The method of claim 5, wherein the portion of the communication session comprises dual tone multi-frequency (DTMF) audio.
  • 13. The method of claim 5, further comprising recording the communication session.
  • 14. The method of claim 13, further comprising deleting the portion not attributable to the at least party from the recording.
  • 15. The method of claim 5, wherein excluding comprises identifying portions of the communication session not attributable to the at least one party.
  • 16. A system for analyzing communication sessions comprising: a recording system operative to record a communication session; and a voice analysis system operative to receive information corresponding to the communication session and perform post-recording processing on the information, wherein voice analysis system is configured to exclude a portion of the information corresponding to the communication session, that is not attributable to speech of at least one party of the communication session, from post-recording processing, wherein the portion of the communication session comprises audio from an interactive voice response (IVR) system.
  • 17. The system of claim 16, wherein the voice analysis system is configured to perform at least one of speech recognition and phonetic analysis during the post-recording processing.
  • 18. The system of claim 16, wherein the voice analysis system comprises an identification system operative to identify portions of the communication session containing music, announcements and synthetic human voices.
Related Publications (1)
Number Date Country
20080082340 A1 Apr 2008 US