As communication technologies have improved, businesses and individuals have desired greater functionality in their communication networks. As a nonlimiting example, many businesses have created call center infrastructures in which a customer or other user can call to receive information related to the business. As customers call into the call center, the customer may be connected with a customer service representative who provides the desired information. Depending on the time of the call, the subject matter of the call, and/or other data, the customer may be connected with different customer service representatives. As such, depending on these and/or other factors, the customer may be provided with varying levels of customer service. Because most businesses desire to provide the highest possible quality of customer service, many businesses have turned to recording the communication between the customer and the customer service representative. While recording this data has proven beneficial in many cases, many businesses receive call volumes that inhibit the business from reviewing all of the call data received.
As such, many businesses have turned to speech recognition technology to capture the recorded communication data and thereby provide a textual document for review of the communication. While textual documentation of a communication has also proven beneficial, similar issues may exist in that the sheer amount of data may be such that review of the data is impractical.
To combat this problem, a number of businesses have also implemented analytics technologies to analyze the speech-recognized communications. One such technology that has emerged is large vocabulary continuous speech recognition (LVCSR). LVCSR technologies convert the received audio of a communication into an English-language textual transcript. From this textual document, analytics may be applied to determine various data related to the communication. Additionally, phonetic speech recognition may be utilized for capturing the communication data.
While these and other technologies may provide a mechanism for capturing communication data, the sheer amount of data to be processed may consume extensive hardware resources. As such, a solution that increases speed and/or reduces resource consumption is desired.
Included are embodiments for multi-pass analytics. At least one embodiment of a method includes receiving audio data associated with a communication, performing first tier speech to text analytics on the received audio data, and performing second tier speech to text analytics on the received audio data.
Also included are embodiments of a system for multi-pass analytics. At least one embodiment of a system includes a receiving component configured to receive audio data associated with a communication and a first tier speech to text analytics component configured to perform first tier speech to text analytics on the received audio data. Some embodiments include a second tier speech to text analytics component configured to perform, in response to a determination made from the first tier analytics, second tier speech to text analytics on the received audio data.
Also included are embodiments of a computer readable medium for multi-pass analytics. At least one embodiment includes receiving logic configured to receive audio data associated with a communication and first tier speech to text analytics logic configured to perform first tier speech to text analytics on the received audio data. Some embodiments include second tier speech to text analytics logic configured to perform, in response to a determination made from the first tier analytics, second tier speech to text analytics on the received audio data.
Other systems, methods, features, and advantages of this disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description and be within the scope of the present disclosure.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. While several embodiments are described in connection with these drawings, there is no intent to limit the disclosure to the embodiment or embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.
Included are embodiments for increasing the speed of speech to text conversion and related analytics. More specifically, in at least one embodiment, first tier speech to text analytics and second tier speech to text analytics are used. In other embodiments, a first tier may be configured for speech to text conversion and a second tier may be configured for speech to text analytics. Other embodiments are also included, as discussed with reference to the drawings.
While in some configurations an audio recording can be provided to an analyst to determine the quality of customer service, some embodiments may include a voice to text conversion of the communication. Large Vocabulary Continuous Speech Recognition (LVCSR) may be utilized to create an English-translated textual document associated with the communication. While an LVCSR speech-recognized textual document may provide enhanced searching capabilities related to the communication, depending on an accuracy threshold, LVCSR technologies may be slow in execution. Similarly, in many phonetic technologies for speech recognition, processing of search functions associated with the communication may be slow.
Additionally, while a user can send a communication request via communication device 104, in some embodiments a user utilizing computing device 108 may initiate a communication to call center 106 via network 100. In such configurations, the user may utilize a soft phone and/or other communications logic for initiating and facilitating the communication.
One should also note that a call center can include, but is not limited to, outsourced contact centers, outsourced customer relationship management, customer relationship management, voice of the customer, customer interaction, contact center, multi-media contact center, remote office, distributed enterprise, work-at-home agents, remote agents, branch office, back office, performance optimization, workforce optimization, hosted contact centers, and speech analytics, for example.
Call center 106 may also include an analytic scorecard 220, a quality management (QM) evaluations component 222, an enterprise reporting component 224, and a speech and replay component 226. An agent 228 can utilize one or more of the components of call center 106 to facilitate a communication with a caller on communications device 104. Similarly, an analyst 230 can utilize one or more components of call center 106 to analyze the quality of the communications between the agent 228 and the caller associated with communications device 104. A supervisor 232 may also have access to components of call center 106 to oversee the agent 228 and/or the analyst 230 and their interactions with a caller on communications device 104.
Additionally, a recognition engine cluster 202 may be coupled to call center 106 directly and/or via network 100. Recognition engine cluster 202 may include one or more servers that may provide speech recognition functionality to call center 106. In operation, a communication between a caller on communications device 104 and an agent 228, via network 100, may first be received by a recorder subsystem component 204. Recorder subsystem component 204 may record the communications in an audio format. The recorded audio may then be sent to an extraction filtering component 206, which may be configured to extract the dialogue (e.g., remove noise and other unwanted sounds) from the recording. The recorded communication can then be sent to a speech-processing framework component 208 for converting the recorded audio communication into a textual format. Conversion of the audio into a textual format may be facilitated by recognition engine cluster 202; however, this is not a requirement. Regardless, conversion from the audio format to a textual format may be facilitated via LVCSR speech recognition technologies and/or phonetic speech recognition technologies, as discussed in more detail below.
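By way of a nonlimiting illustration only, the following Python sketch shows one way the recording pipeline described above could be arranged. The class and function names are hypothetical stand-ins for the recorder subsystem 204, extraction filtering component 206, and speech-processing framework 208; the actual interfaces are implementation-specific.

```python
from dataclasses import dataclass

@dataclass
class Recording:
    audio: bytes   # raw recorded audio for the communication
    call_id: str

def extract_dialogue(recording: Recording) -> Recording:
    """Extraction filtering (component 206): remove noise and other
    unwanted sounds, keeping only the dialogue."""
    filtered = recording.audio  # placeholder for an actual noise filter
    return Recording(audio=filtered, call_id=recording.call_id)

def to_text(recording: Recording, engine: str = "phonetic") -> str:
    """Speech-processing framework (component 208): convert audio to a
    textual format via LVCSR or phonetic recognition (cluster 202)."""
    if engine == "lvcsr":
        return "<LVCSR transcript>"   # placeholder recognizer output
    return "<phonetic transcript>"    # placeholder recognizer output

def process_call(raw: Recording) -> str:
    recorded = raw                         # recorder subsystem (component 204)
    dialogue = extract_dialogue(recorded)  # extraction filtering (component 206)
    return to_text(dialogue)               # conversion (component 208)

print(process_call(Recording(audio=b"raw call audio", call_id="call-001")))
```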
Upon conversion from audio to a textual format, data related to the communication may be provided to advanced data analytics (pattern recognition) component 218. Advanced data analytics component 218 may be configured to provide analysis associated with the speech to text converted communication to determine the quality of customer service provided to the caller of communications device 104. Advanced data analytics component 218 may utilize atlas component 210 to facilitate this analysis. More specifically, atlas component 210 may include a speech package component 212 that may be configured to analyze various patterns in the speech of the caller of communications device 104. Similarly, desktop event component 214 may be configured to analyze one or more actions that the user of communications device 104 takes on that device. More specifically, network 100 may facilitate communications in an IP network. As such, communications device 104 may facilitate both audio and/or data communications that may include audio, video, images, and/or other data. Additionally, advanced data analytics component 218 may utilize an actions package 216 to determine various components of the interaction between agent 228 and the caller of communications device 104. Advanced data analytics component 218 may then make a determination, based on predetermined criteria, of the quality of call service provided by agent 228.
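The determination of call quality from predetermined criteria might be sketched as follows; the criteria names and threshold here are assumptions chosen for illustration, not criteria from this disclosure.

```python
def score_call(transcript: str, hold_seconds: float) -> dict:
    """Toy predetermined-criteria check, as component 218 might apply."""
    criteria = {
        "greeting_used": "thank you for calling" in transcript.lower(),
        "no_excessive_hold": hold_seconds < 120.0,   # assumed threshold
    }
    score = 100.0 * sum(criteria.values()) / len(criteria)
    return {"criteria": criteria, "score": score}

print(score_call("Thank you for calling, how may I help?", hold_seconds=45.0))
```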
Advanced data analytics component 218 may then facilitate creation of an analytic scorecard 220 and provide enterprise reporting 224. Additionally, call center 106 may provide quality management evaluations 222, as well as speech and replay communications 226. This data may be viewed by an agent 228, an analyst 230, and/or a supervisor 232. Additionally, as discussed in more detail below, an analyst 230 may further analyze the data to provide a basis for advanced data analytics component 218 to determine the quality of customer service.
The processor 382 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the computing device 104, a semiconductor based microprocessor (in the form of a microchip or chip set), a macroprocessor, or generally any device for executing software instructions.
The volatile and nonvolatile memory 384 can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, VRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CD-ROM, etc.). Moreover, the memory 384 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the volatile and nonvolatile memory 384 can also have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor 382.
The software in volatile and nonvolatile memory 384 may include one or more separate programs, each of which includes an ordered listing of executable instructions for implementing logical functions.
A system component embodied as software may also be construed as a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When constructed as a source program, the program is translated via a compiler, assembler, interpreter, or the like, which may or may not be included within the volatile and nonvolatile memory 384, so as to operate properly in connection with the Operating System 386.
The Input/Output devices that may be coupled to system I/O Interface(s) 396 may include input devices, for example but not limited to, a keyboard, mouse, scanner, microphone, camera, proximity device, etc. Further, the Input/Output devices may also include output devices, for example but not limited to, a printer, display, etc. Finally, the Input/Output devices may further include devices that communicate both as inputs and outputs, for instance but not limited to, a modulator/demodulator (modem for accessing another device, system, or network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc. Similarly, network interface 388, which is coupled to local interface 392, can be configured to communicate with a communications network, such as the network 100 discussed above.
If the computing device 104 is a personal computer, workstation, or the like, the software in the volatile and nonvolatile memory 384 may further include a basic input output system (BIOS) (omitted for simplicity). The BIOS is a set of software routines that initialize and test hardware at startup, start the Operating System 386, and support the transfer of data among the hardware devices. The BIOS is stored in ROM so that the BIOS can be executed when the computing device 104 is activated.
When the computing device 104 is in operation, the processor 382 can be configured to execute software stored within the volatile and nonvolatile memory 384, to communicate data to and from the volatile and nonvolatile memory 384, and to generally control operations of the computing device 104 pursuant to the software. Software in memory, in whole or in part, is read by the processor 382, perhaps buffered within the processor 382, and then executed. Additionally, one should note that while the above description is directed to an advanced data analytics component 218, other devices (such as communications device 104, computing device 108, call center 106, and/or other components) can also include the components and/or functionality described above.
One should also note that communications device 104 can be configured with one or more of the components and/or logic described above with respect to analytics component 218. Additionally, analytics component 218, communications device 104, computing device 108, and/or other components of call center 106 can include voice recognition logic, voice-to-text logic, text-to-voice logic, etc. (or any permutation thereof), as well as other components and/or logic for facilitating the functionality described herein. Additionally, in some exemplary embodiments, one or more of these components can include the functionality described with respect to analytics component 218.
More specifically, ingest audio component 408 can be configured to facilitate the creation of a phonetic transcript with one or more phonemes that occur in the communication. In one embodiment, the one or more phonemes can be represented using the International Phonetic Alphabet (IPA), which may be encoded for computer use via the ISO 10646 standard (UNICODE). Ingest audio component 408 can then create the phonetic transcript 412.
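Because IPA symbols are ordinary Unicode (ISO/IEC 10646) code points, a phonetic transcript can be held in plain strings. The following minimal sketch of such a transcript structure is illustrative; the phoneme recognizer itself is out of scope, so a fixed IPA sequence stands in for its output.

```python
from dataclasses import dataclass, field

@dataclass
class PhoneticTranscript:
    call_id: str
    phonemes: list[str] = field(default_factory=list)  # IPA symbols, in order

    def text(self) -> str:
        return " ".join(self.phonemes)

# "brown fox" rendered as an illustrative IPA phoneme sequence
transcript = PhoneticTranscript("call-001",
                                ["b", "ɹ", "aʊ", "n", "f", "ɒ", "k", "s"])
print(transcript.text())  # b ɹ aʊ n f ɒ k s
```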
The phonetic transcript 412 can then be sent to a search system 420, which is part of a search component 416. The search system can also receive vocabulary and rules as designated by an analyst, such as the analyst 230 discussed above.
As a nonlimiting example, an analyst may wish to determine whether the term "brown fox" was spoken during a recorded communication.
The phonetic transcript can then be sent to a search component 416, which includes a search system 420. The search system 420 can utilize vocabulary and rules component 418, as well as receive the search terms 414. As indicated above, the search term “brown fox” can be a desired term to be found in a communication. The search system 420 can then search the phonetic transcript for the term “brown fox.” As the phonetic transcript may not include an English translation of the audio recording, vocabulary and rules component 418 may be configured to provide a correlation between the search term 414 (which may be provided in English) and the phonetic representation of the desired search terms.
If the term “brown fox” appears in the phonetic transcript 412, a signal and/or scorecard can be provided to an analyst 230 to determine the quality of customer service provided by agent 228. Additionally, some embodiments can be configured to provide information to analyst 230 in the event that the term “brown fox” does not appear in the communication. Similarly, other search terms and/or search criteria may be utilized to provide data to analyst 230. Further description of phonetic speech to text conversion and analytics is disclosed in U.S. application Ser. No. 11/540,281, entitled “Speech Analysis Using Statistical Learning,” which is hereby incorporated by reference in its entirety.
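A minimal sketch of this search flow follows: a small vocabulary table maps the English search term to a phoneme sequence (standing in for vocabulary and rules component 418), and a subsequence scan stands in for search system 420. The pronunciation table is an assumption for illustration.

```python
VOCABULARY = {  # English term -> IPA phoneme sequence (illustrative)
    "brown fox": ["b", "ɹ", "aʊ", "n", "f", "ɒ", "k", "s"],
}

def contains_term(transcript_phonemes: list[str], term: str) -> bool:
    """Scan the phonetic transcript for the term's phoneme sequence."""
    target = VOCABULARY[term]
    n = len(target)
    return any(transcript_phonemes[i:i + n] == target
               for i in range(len(transcript_phonemes) - n + 1))

# "...the brown fox..." as phonemes
phonemes = ["ð", "ə", "b", "ɹ", "aʊ", "n", "f", "ɒ", "k", "s"]
print(contains_term(phonemes, "brown fox"))  # True -> signal analyst 230
```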
One should note that the nonlimiting example discussed above is included for purposes of illustration only and is not intended to limit the scope of this disclosure.
In at least one embodiment, the system may be configured such that first tier recognition server 606 provides speech to text conversion and second tier recognition server 608 provides analytics associated with the converted text.
Similarly, some embodiments may be configured such that first tier recognition server 606 is configured to provide a precursory speech to text conversion and/or analytics. Upon recognition of a desired search term associated with the communication, first tier recognition server 606 can provide at least a portion of the communication data to second tier recognition server 608. Second tier recognition server 608 may be configured to provide a more thorough analysis (and/or conversion) of the data. As first tier server 606 may be configured to process at least a portion of the received data and send at least a portion of that data to second tier server 608, network performance may improve.
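The network-load benefit of this arrangement can be sketched as follows: a cheap first pass screens the audio and forwards only the portion around a hit to the slower second pass. The byte-level matching is a placeholder for actual phonetic screening, and the window size is an arbitrary assumption.

```python
def first_tier_screen(audio: bytes, term: bytes) -> bytes | None:
    """Precursory pass (as on server 606): return only the portion of
    the data around a hit for the desired term, or None if no hit."""
    idx = audio.find(term)  # placeholder for phonetic matching
    if idx < 0:
        return None
    return audio[max(0, idx - 16): idx + len(term) + 16]  # assumed window

def second_tier_analyze(portion: bytes) -> str:
    """More thorough (slower, more accurate) pass, as on server 608."""
    return f"analyzed {len(portion)} bytes in detail"

audio = b"...the customer mentioned the brown fox twice..."
portion = first_tier_screen(audio, b"brown fox")
if portion is not None:  # only the relevant portion crosses the network
    print(second_tier_analyze(portion))
```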
While first tier recognition server 606 is illustrated as being directly coupled to second tier recognition server 608, this is a nonlimiting example. More specifically, in at least one embodiment, first tier recognition server 606 is coupled to network 100 and second tier recognition server 608 is also coupled to network 100. First tier recognition server 606 may be separately located from second tier recognition server 608 and may facilitate communications with second tier recognition server 608 via network 100. Additionally, while first tier recognition server 606 and second tier recognition server 608 are illustrated as separate components, this is also a nonlimiting example. In at least one embodiment, the functionality described with respect to first tier recognition server 606 and second tier recognition server 608 may be provided in a single component for providing the desired functionality.
First tier recognition server 706 may be configured to provide one or more speech recognition and/or analytics services. As a nonlimiting example, first tier recognition server 706a may be configured to determine speaker identification associated with the communication. Similarly, first tier server 706b may be configured to provide speaker verification associated with the communication. First tier server 706c may be configured to determine speaker emotion. Similarly, second tier speech recognition server 708a may be configured to exclusively serve first tier recognition server 706a; however, this is a nonlimiting example. More specifically, second tier speech recognition server 708a may be configured as a speaker identification determination server to receive data from first tier recognition server 706a. In operation, audio data may be sent to first tier speech recognition server 706a, which may be a phonetic speech recognition server. First tier speech recognition server 706a may be configured to determine at least one characteristic associated with the audio data to determine whether speaker identification may be determined. If a determination is made that speaker identification can be determined, first tier speech recognition server 706a may send at least a portion of the received audio data (which may be converted into a phonetic transcript and/or other form) to second tier recognition server 708a. Second tier speech recognition server 708a may be configured to fully analyze the received data to determine the identification of the speaker.
While the nonlimiting example discussed above indicates that the second tier speech recognition server 708a is a dedicated server for first tier speech recognition server 706a, this is a nonlimiting example. More specifically, in at least one nonlimiting example, second tier recognition servers may serve one or more of the first tier speech recognition servers 706. Similarly, some embodiments can be configured such that first tier recognition server 706 may be configured to provide the initial speech recognition functionality while second tier speech recognition server 708 may be configured to provide more specific services. In this nonlimiting example, first tier speech recognition servers 706a, 706b, 706c may be configured to provide a speech to text conversion associated with received audio data. Upon conversion, first tier speech recognition servers 706a, 706b, and 706c can make a determination as to the desired analytics for the associated communication. Upon determining the desired analytics, first tier speech recognition server 706 can send the phonetic data to a second tier speech recognition server 708 associated with the desired analytic.
More specifically, if second tier speech recognition server 708a is a speaker identification server, one or more of the first tier recognition servers 706a, 706b, 706c can send data to second tier speech recognition server 708a upon determination that a speaker identification is required. Similarly, if second tier speech recognition server 708b is configured for speaker verification, speech recognition servers 706 may be configured to send communication data to second tier speech recognition server 708b. Other configurations are also included.
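One way to express this routing is a dispatch table keyed by the desired analytic, sketched below. The analytic names follow the examples above; the conversion and analyzer bodies are placeholders.

```python
from typing import Callable

def speaker_identification(data: str) -> str:   # e.g., server 708a
    return "speaker: <id>"        # placeholder identification logic

def speaker_verification(data: str) -> str:     # e.g., server 708b
    return "verified: <yes/no>"   # placeholder verification logic

SECOND_TIER: dict[str, Callable[[str], str]] = {
    "speaker_identification": speaker_identification,
    "speaker_verification": speaker_verification,
}

def first_tier(audio: bytes, desired_analytic: str) -> str:
    phonetic = audio.decode(errors="ignore")        # stand-in for conversion
    return SECOND_TIER[desired_analytic](phonetic)  # route to 708a/708b

print(first_tier(b"raw audio", "speaker_identification"))
```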
Also included in this nonlimiting example are a first tier speech recognition server 806, a second tier speech recognition server 808, and a third tier speech recognition server 810.
In operation, first tier speech recognition server 806 may be configured to receive raw data associated with a communication. First tier speech recognition server 806 may then perform expedited speech recognition services on the received data. Second tier speech recognition server 808 may include more thorough speech recognition functionality, which may be slower in operation than first tier speech recognition server 806; however, second tier server 808 may provide greater accuracy related to received data. Additionally, second tier speech recognition server 808 may make a determination whether a third tier speech recognition server 810 may be utilized.
Third tier speech recognition server 810 may be configured to provide services different than those of second tier speech recognition server 808. As a nonlimiting example, second tier speech recognition server 808 may be configured to determine speaker confidence associated with received audio data, while a third tier speech recognition server may be configured to determine speaker emotion associated with the received audio. As such, if information regarding both speaker emotion and speaker confidence is desired, second tier speech recognition server 808 and third tier speech recognition server 810 (as well as first tier speech recognition server 806) may be utilized.
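The three-tier flow could be cascaded as in the following sketch, where the second tier decides whether the third tier is utilized. All analytic outputs are hard-coded stand-ins.

```python
def tier1_fast_pass(audio: bytes) -> bool:
    """Expedited screening, as on server 806."""
    return len(audio) > 0            # placeholder screen

def tier2_confidence(audio: bytes) -> float:
    """Thorough pass determining speaker confidence (server 808)."""
    return 0.87                      # placeholder value

def tier3_emotion(audio: bytes) -> str:
    """Different service: speaker emotion (server 810)."""
    return "calm"                    # placeholder value

def analyze(audio: bytes, want_emotion: bool) -> dict:
    result: dict = {}
    if not tier1_fast_pass(audio):
        return result
    result["confidence"] = tier2_confidence(audio)
    if want_emotion:                 # tier 2 determines whether tier 3 is used
        result["emotion"] = tier3_emotion(audio)
    return result

print(analyze(b"audio bytes", want_emotion=True))
```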
Call center 106 may then convert the received audio into a textual transcript (e.g., a phonetic transcript and/or a spoken language transcript and/or other type of transcript), as illustrated in block 936. Call center 106 may then determine whether the audio potentially includes the recognition criteria (block 938). If the received audio data does not include the recognition criteria, the process may end. If, however, the first tier speech recognition server determines that the audio potentially includes the recognition criteria, the first tier speech recognition server can send at least a portion of the audio (which may be converted to a phonetic and/or other transcript) to the second tier speech recognition server (block 938). The flowchart then proceeds to jump block 940.
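Rendered as code, the decision in blocks 936-938 might look like the sketch below; the conversion and criteria test are placeholders for the first tier server's actual recognition.

```python
def convert_to_transcript(audio: bytes) -> str:
    """Block 936: produce a phonetic and/or spoken language transcript."""
    return audio.decode(errors="ignore")  # placeholder conversion

def potentially_includes(transcript: str, criteria: str) -> bool:
    """Block 938: does the audio potentially include the criteria?"""
    return criteria in transcript         # placeholder test

def first_tier_flow(audio: bytes, criteria: str) -> str | None:
    transcript = convert_to_transcript(audio)           # block 936
    if not potentially_includes(transcript, criteria):  # block 938
        return None                                     # process ends
    return transcript   # forwarded to the second tier server

portion = first_tier_flow(b"...brown fox...", "brown fox")
print("forwarded to second tier" if portion else "process ended")
```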
As discussed above, the second tier speech recognition server may provide a more detailed speech recognition analysis of the received audio data. Similarly, some embodiments may be configured to provide a specific speech recognition analysis task, such as speaker identification, speaker verification, speaker emotion, speaker confidence, and/or other types of analysis.
As illustrated in this nonlimiting example, upon a determination that the received audio contains one or more attributes associated with the determined recognition criteria, the first tier speech recognition server can send at least a portion of the data to the second tier speech recognition server. As such, full analysis of the received audio may be expedited.
It should be noted that speech analytics (i.e., the analysis of recorded speech or real-time speech) can be used to perform a variety of functions, such as automated call evaluation, call scoring, quality monitoring, quality assessment, and compliance/adherence. By way of example, speech analytics can be used to compare a recorded interaction to a script (e.g., a script that the agent was to use during the interaction). In other words, speech analytics can be used to measure how well agents adhere to scripts, identify which agents are "good" sales people, and identify which agents need additional training. As such, speech analytics can be used to find agents who do not adhere to scripts. In yet another example, speech analytics can measure script effectiveness, identify which scripts are effective and which are not, and find, for example, the section of a script that displeases or upsets customers (e.g., based on emotion detection). As another example, compliance with various policies can be determined. This may be the case in, for example, the collections industry, which is highly regulated and in which agents must abide by many rules. The speech analytics of the present disclosure may identify when agents are not adhering to their scripts and guidelines. This can improve collection effectiveness and reduce corporate liability and risk.
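As one illustration only, script adherence can be approximated by comparing the recognized transcript with the prescribed script using the standard-library difflib module; the 0.8 threshold is an assumed example value, not one from this disclosure.

```python
import difflib

def adherence_ratio(script: str, transcript: str) -> float:
    """Similarity in [0, 1] between the script and what was said."""
    return difflib.SequenceMatcher(None, script.lower(),
                                   transcript.lower()).ratio()

script = "thank you for calling acme how may i help you today"
transcript = "thanks for calling acme how can i help you"
ratio = adherence_ratio(script, transcript)
print(f"adherence: {ratio:.2f}",
      "OK" if ratio >= 0.8 else "flag for training")  # assumed threshold
```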
In this regard, various types of recording components can be used to facilitate speech analytics. Specifically, such recording components can perform one or more various functions such as receiving, capturing, intercepting and tapping of data. This can involve the use of active and/or passive recording techniques, as well as the recording of voice and/or screen data.
It should also be noted that speech analytics can be used in conjunction with such screen data (e.g., screen data captured from an agent's workstation/PC) for evaluation, scoring, analysis, adherence and compliance purposes, for example. Such integrated functionalities improve the effectiveness and efficiency of, for example, quality assurance programs. For example, the integrated function can help companies to locate appropriate calls (and related screen interactions) for quality monitoring and evaluation. This type of “precision” monitoring improves the effectiveness and productivity of quality assurance programs.
Another aspect that can be accomplished involves fraud detection. In this regard, various manners can be used to determine the identity of a particular speaker. In some embodiments, speech analytics can be used independently and/or in combination with other techniques for performing fraud detection. Specifically, some embodiments can involve identification of a speaker (e.g., a customer) and correlating this identification with other information to determine whether, for example, a fraudulent claim is being made. If such potential fraud is identified, some embodiments can provide an alert. For example, the speech analytics of the present disclosure may identify the emotions of callers. The identified emotions can be used in conjunction with identifying specific concepts to help companies spot either agents or callers/customers who are involved in fraudulent activities. Referring back to the collections example outlined above, by using emotion and concept detection, companies can identify which customers are attempting to mislead collectors into believing that they are going to pay. The earlier the company is aware of a problem account, the more recourse options it will have. Thus, the speech analytics of the present disclosure can function as an early warning system to reduce losses.
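A toy early-warning rule combining these signals might look like the following; the detector outputs, labels, and rule are all assumptions for illustration.

```python
def fraud_alert(speaker_id: str, claimed_id: str,
                emotion: str, concepts: set[str]) -> bool:
    """Correlate speaker identity with emotion/concept detection."""
    identity_mismatch = speaker_id != claimed_id
    evasive_pattern = (emotion in {"agitated", "stressed"}
                       and "promise_to_pay" in concepts)
    return identity_mismatch or evasive_pattern

if fraud_alert(speaker_id="spk-42", claimed_id="spk-17",
               emotion="agitated", concepts={"promise_to_pay"}):
    print("alert analyst: potential fraud / problem account")
```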
Additionally, included in this disclosure are embodiments of integrated workforce optimization platforms, as discussed in U.S. application Ser. No. 11/359,356, filed on Feb. 22, 2006, entitled “Systems and Methods for Workforce Optimization,” which is hereby incorporated by reference in its entirety. At least one embodiment of an integrated workforce optimization platform integrates: (1) Quality Monitoring/Call Recording—voice of the customer; the complete customer experience across multimedia touch points; (2) Workforce Management—strategic forecasting and scheduling that drives efficiency and adherence, aids in planning, and helps facilitate optimum staffing and service levels; (3) Performance Management—key performance indicators (KPIs) and scorecards that analyze and help identify synergies, opportunities and improvement areas; (4) e-Learning—training, new information and protocol disseminated to staff, leveraging best practice customer interactions and delivering learning to support development; and/or (5) Analytics—deliver insights from customer interactions to drive business performance. By way of example, the integrated workforce optimization process and system can include planning and establishing goals—from both an enterprise and center perspective—to ensure alignment and objectives that complement and support one another. Such planning may be complemented with forecasting and scheduling of the workforce to ensure optimum service levels. Recording and measuring performance may also be utilized, leveraging quality monitoring/call recording to assess service quality and the customer experience.
The embodiments disclosed herein can be implemented in hardware, software, firmware, or a combination thereof. At least one embodiment disclosed herein is implemented in software and/or firmware that is stored in a memory and that is executed by a suitable instruction execution system. If implemented in hardware, as in an alternative embodiment, the embodiments disclosed herein can be implemented with any or a combination of the following technologies: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
One should note that the flowcharts included herein show the architecture, functionality, and operation of a possible implementation of software. In this regard, each block can be interpreted to represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted and/or not at all. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
One should note that any of the programs listed herein, which can include an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer-readable medium could include an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of the certain embodiments of this disclosure can include embodying the functionality described in logic embodied in hardware or software-configured mediums.
One should also note that conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more particular embodiments or that one or more particular embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
It should be emphasized that the above-described embodiments are merely possible examples of implementations, merely set forth for a clear understanding of the principles of this disclosure. Many variations and modifications may be made to the above-described embodiments without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure.
This application is a continuation application of and claims priority to U.S. patent application Ser. No. 11/924,201, filed on Oct. 25, 2007, which is a continuation of U.S. patent application Ser. No. 11/540,322, titled MULTI-PASS SPEECH ANALYTICS, filed on Sep. 29, 2006, and which is hereby incorporated by reference in its entirety. No new matter has been added.
Number | Date | Country
---|---|---
20120026280 A1 | Feb 2012 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 11924201 | Oct 2007 | US
Child | 13271681 | | US
Parent | 11540322 | Sep 2006 | US
Child | 11924201 | | US