Systems and methods for dynamically improving user intelligibility of synthesized speech in a work environment

Information

  • Patent Grant
  • Patent Number
    11,817,078
  • Date Filed
    Friday, June 2, 2023
  • Date Issued
    Tuesday, November 14, 2023
Abstract
A method and apparatus that dynamically adjust operational parameters of a text-to-speech engine in a speech-based system are disclosed. A voice engine or other application of a device provides a mechanism to alter the adjustable operational parameters of the text-to-speech engine. In response to one or more environmental conditions, the adjustable operational parameters of the text-to-speech engine are modified to increase the intelligibility of synthesized speech.
Description
FIELD OF THE INVENTION

Embodiments of the invention relate to speech-based systems, and in particular, to systems, methods, and program products for improving speech cognition in speech-directed or speech-assisted work environments that utilize synthesized speech.


BACKGROUND

Speech recognition has simplified many tasks in the workplace by permitting hands-free communication with a computer as a convenient alternative to communication via conventional peripheral input/output devices. A user may enter data and commands by voice using a device having a speech recognizer. Commands, instructions, or other information may also be communicated to the user by a speech synthesizer. Generally, the synthesized speech is provided by a text-to-speech (TTS) engine. Speech recognition finds particular application in mobile computing environments in which interaction with the computer by conventional peripheral input/output devices is restricted or otherwise inconvenient.


For example, wireless wearable, portable, or otherwise mobile computer devices can provide a user performing work-related tasks with desirable computing and data-processing functions while offering the user enhanced mobility within the workplace. One example of an area in which users rely heavily on such speech-based devices is inventory management. Inventory-driven industries rely on computerized inventory management systems for performing various tasks, such as food and retail product distribution, manufacturing, and quality control. An overall integrated management system typically includes a combination of a central computer system for tracking and management, and the people who use and interface with the computer system in the form of order fillers and other users. In one scenario, the users handle the manual aspects of the integrated management system under the command and control of information transmitted from the central computer system to the wireless mobile device and to the user through a speech-driven interface.


As the users process their orders and complete their assigned tasks, a bi-directional communication stream of information is exchanged over a wireless network between users wearing wireless devices and the central computer system. The central computer system thereby directs multiple users and verifies completion of their tasks. To direct the user's actions, information received by each mobile device from the central computer system is translated into speech or voice instructions for the corresponding user. Typically, to receive the voice instructions, the user wears a headset coupled with the mobile device.


The headset includes a microphone for spoken data entry and an ear speaker for audio data feedback. Speech from the user is captured by the headset and converted using speech recognition into data used by the central computer system. Similarly, instructions from the central computer or mobile device in the form of text are delivered to the user as voice prompts generated by the TTS engine and played through the headset speaker. Using such mobile devices, users may perform assigned tasks virtually hands-free so that the tasks are performed more accurately and efficiently.


An illustrative example of a set of user tasks in a speech-directed work environment may involve filling an order, such as filling a load for a particular truck scheduled to depart from a warehouse. The user may be directed to different warehouse areas (e.g., a freezer) in which they will be working to fill the order. The system vocally directs the user to particular aisles, bins, or slots in the work area to pick particular quantities of various items using the TTS engine of the mobile device. The user may then vocally confirm each location and the number of picked items, which may cause the user to receive the next task or order to be picked.


The speech synthesizer or TTS engine operating in the system or on the device translates the system messages into speech, and typically provides the user with adjustable operational parameters or settings such as audio volume, speed, and pitch. Generally, the TTS engine operational settings are set when the user or worker logs into the system, such as at the beginning of a shift. The user may walk through a number of different menus or selections to control how the TTS engine will operate during their shift. In addition to speed, pitch, and volume, the user will also generally select the TTS engine for their native tongue, such as English or Spanish, for example.


As users become more experienced with the operation of the inventory management system, they will typically increase the speech rate and/or pitch of the TTS engine. The increased speech parameters, such as increased speed, allow the user to hear prompts and perform tasks more quickly as they gain familiarity with the prompts spoken by the application. However, there are often situations encountered by the worker that hinder the intelligibility of speech from the TTS engine at the user's selected settings.


For example, the user may receive an unfamiliar prompt or enter into an area of a voice or task application that they are not familiar with. Alternatively, the user may enter a work area with a high ambient noise level or other audible distractions. All these factors degrade the user's ability to understand the TTS engine generated speech. This degradation may result in the user being unable to understand the prompt, with a corresponding increase in work errors, in user frustration, and in the amount of time necessary to complete the task.


With existing systems, it is time consuming and frustrating to constantly navigate through the necessary menus to change the TTS engine settings in order to address such factors and changes in the work environment. Moreover, since many such factors affecting speech intelligibility are temporary, it becomes particularly time consuming and frustrating to repeatedly return to and navigate through the necessary menus to change the TTS engine back to its previous settings once the temporary environmental condition has passed.


Accordingly, there is a need for systems and methods that improve user cognition of synthesized speech in speech-directed environments by adapting to the user environment. These issues and other needs in the prior art are met by the invention as described and claimed below.


SUMMARY

In an embodiment of the invention, a communication system for a speech-based work environment is provided that includes a text-to-speech engine having one or more adjustable operational parameters. Processing circuitry monitors an environmental condition related to intelligibility of an output of the text-to-speech engine, and modifies the one or more adjustable operational parameters of the text-to-speech engine in response to the monitored environmental condition.


In another embodiment of the invention, a method of communicating in a speech-based environment using a text-to-speech engine is provided that includes monitoring an environmental condition related to intelligibility of an output of the text-to-speech engine. The method further includes modifying one or more adjustable operational parameters of the text-to-speech engine in response to the environmental condition.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the general description of the invention given above and the detailed description of the embodiments given below, serve to explain the principles of the invention.



FIG. 1 is a diagrammatic illustration of a typical speech-enabled task management system showing a headset and a device being worn by a user performing a task in a speech-directed environment consistent with embodiments of the invention;



FIG. 2 is a diagrammatic illustration of hardware and software components of the task management system of FIG. 1;



FIG. 3 is a flowchart illustrating a sequence of operations that may be executed by a software component of FIG. 2 to improve the intelligibility of a system prompt message consistent with embodiments of the invention;



FIG. 4 is a flowchart illustrating a sequence of operations that may be executed by a software component of FIG. 2 to improve the intelligibility of a repeated prompt consistent with embodiments of the invention;



FIG. 5 is a flowchart illustrating a sequence of operations that may be executed by a software component of FIG. 2 to improve the intelligibility of a prompt played in an adverse environment consistent with embodiments of the invention;



FIG. 6 is a flowchart illustrating a sequence of operations that may be executed by a software component of FIG. 2 to improve the intelligibility of a prompt that contains non-native words consistent with embodiments of the invention; and



FIG. 7 is a flowchart illustrating a sequence of operations that may be executed by a software component of FIG. 2 to improve the intelligibility of a prompt that contains words or sections requiring special emphasis consistent with embodiments of the invention.





It should be understood that the appended drawings are not necessarily to scale, presenting a somewhat simplified representation of various features illustrative of the basic principles of embodiments of the invention. The specific design features of embodiments of the invention as disclosed herein, including, for example, specific dimensions, orientations, locations, and shapes of various illustrated components, as well as specific sequences of operations (e.g., including concurrent and/or sequential operations), will be determined in part by the particular intended application and use environment. Certain features of the illustrated embodiments may have been enlarged or distorted relative to others to facilitate visualization and provide a clear understanding.


DETAILED DESCRIPTION

Embodiments of the invention are related to methods and systems for dynamically modifying adjustable operational parameters of a text-to-speech (TTS) engine running on a device in a speech-based system. To this end, the system monitors one or more environmental conditions associated with a user that are related to or otherwise affect the user intelligibility of the speech or audible output that is generated by the TTS engine. As used herein, environmental conditions are understood to include any operating/work environment conditions or variables which are associated with the user and may affect or provide an indication of the intelligibility of generated speech or audible outputs of the TTS engine for the user. Environmental conditions associated with a user thus include, but are not limited to, user environment conditions such as ambient noise level or temperature, user tasks and speech outputs or prompts or messages associated with the tasks, system events or status, and/or user input such as voice commands or instructions issued by the user. The system may thereby detect or otherwise determine that the operational environment of a device user has certain characteristics, as reflected by monitored environmental conditions. In response to monitoring the environmental conditions or sensing of other environmental characteristics that may reduce the ability of the user to understand TTS voice prompts or other TTS audio data, the system may modify one or more adjustable operational parameters of the TTS engine to improve intelligibility. Once the system operational environment or environmental variable has returned to its original or previous state, a predetermined amount of time has passed, or a particular sensed environmental characteristic ceases or ends, the adjusted or modified operational parameters of the TTS engine may be returned to their original or previous settings. The system may thereby improve the user experience by automatically increasing the user's ability to understand critical speech or spoken data in adverse operational environments and conditions while maintaining the user's preferred settings under normal conditions.
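For illustration only, the following Python sketch shows one way the monitor-modify-restore pattern described above might be organized in software. All class, method, and parameter names (for example, TTSParams, DynamicTTSController, set_params) are assumptions made for this sketch and do not represent an actual device or product API.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TTSParams:
    speed: float = 1.0       # relative speaking rate
    pitch: float = 1.0       # relative pitch
    volume: float = 0.7      # 0.0 .. 1.0
    language: str = "en-US"  # TTS language/voice library

class DynamicTTSController:
    """Temporarily overrides the user's preferred TTS settings while a
    monitored environmental condition persists, then restores them."""

    def __init__(self, tts_engine, user_params: TTSParams):
        self.tts = tts_engine              # object exposing set_params() (assumed)
        self.user_params = user_params     # settings chosen by the user at login
        self.tts.set_params(user_params)

    def apply_temporary(self, **overrides) -> None:
        """Apply a temporary override, e.g. apply_temporary(speed=0.8, volume=0.9)."""
        self.tts.set_params(replace(self.user_params, **overrides))

    def restore(self) -> None:
        """Return to the user's preferred settings once the condition ends."""
        self.tts.set_params(self.user_params)
```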



FIG. 1 is an illustration of a user in a typical speech-based system 10 consistent with embodiments of the invention. The system 10 includes a computer device or terminal 12. The device 12 may be a mobile computer device, such as a wearable or portable device that is used for mobile workers. The example embodiments described herein may refer to the device 12 as a mobile device, but the device 12 may also be a stationary computer that a user interfaces with using a mobile headset or device such as a Bluetooth® headset. Bluetooth® is an open wireless standard managed by Bluetooth SIG, Inc. of Kirkland, Washington. The device 12 communicates with a user 13 through a headset 14 and may also interface with one or more additional peripheral devices 15, such as a printer or identification code reader. As illustrated, the device 12 and the peripheral device 15 are mobile devices usually worn or carried by the user 13, such as on a belt 16.


In one embodiment of the invention, device 12 may be carried or otherwise transported, such as on the user's waist or forearm, or on a lift truck, harness, or other manner of transportation. The user 13 and the device 12 communicate using speech through the headset 14, which may be coupled to the device 12 through a cable 17 or wirelessly using a suitable wireless interface. One such suitable wireless interface may be Bluetooth®. As noted above, if a wireless headset is used, the device 12 may be stationary, since the mobile worker can move around using just the mobile or wireless headset. The headset 14 includes one or more speakers 18 and one or more microphones 19. The speaker 18 is configured to play TTS audio or audible outputs (such as speech output associated with a speech dialog to instruct the user 13 to perform an action), while the microphone 19 is configured to capture speech input from the user 13 (such as a spoken user response for conversion to machine readable input). The user 13 may thereby interface with the device 12 hands-free through the headset 14 as they move through various work environments or work areas, such as a warehouse.



FIG. 2 is a diagrammatic illustration of an exemplary speech-based system 10 as in FIG. 1 including the device 12, the headset 14, the one or more peripheral devices 15, a network 20, and a central computer system 21. The network 20 operatively connects the device 12 to the central computer system 21, which allows the central computer system 21 to download data and/or user instructions to the device 12. The link between the central computer system 21 and device 12 may be wireless, such as an IEEE 802.11 (commonly referred to as WiFi) link, or may be a cabled link. If device 12 is a mobile device and carried or worn by the user, the link with system 21 will generally be wireless. By way of example, the computer system 21 may host an inventory management program that downloads data in the form of one or more tasks to the device 12 that will be implemented through speech. For example, the data may contain information about the type, number and location of items in a warehouse for assembling a customer order. The data thereby allows the device 12 to provide the user with a series of spoken instructions or directions necessary to complete the task of assembling the order or some other task.


The device 12 includes suitable processing circuitry that may include a processor 22, a memory 24, a network interface 26, an input/output (I/O) interface 28, a headset interface 30, and a power supply 32 that includes a suitable power source, such as a battery, for example, and provides power to the electrical components comprising the device 12. As noted, device 12 may be a mobile device and various examples discussed herein refer to such a mobile device. One suitable device is a TALKMAN® terminal device available from Vocollect, Inc. of Pittsburgh, Pennsylvania. However, device 12 may be a stationary computer that the user interfaces with through a wireless headset, or may be integrated with the headset 14. The processor 22 may consist of one or more processors selected from microprocessors, micro-controllers, digital signal processors, microcomputers, central processing units, field programmable gate arrays, programmable logic devices, state machines, logic circuits, analog circuits, digital circuits, and/or any other devices that manipulate signals (analog and/or digital) based on operational instructions that are stored in memory 24.


Memory 24 may be a single memory device or a plurality of memory devices including but not limited to read-only memory (ROM), random access memory (RAM), volatile memory, non-volatile memory, static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, cache memory, and/or any other device capable of storing information. Memory 24 may also include memory storage physically located elsewhere in the device 12, such as memory integrated with the processor 22.


The device 12 may be under the control and/or otherwise rely upon various software applications, components, programs, files, objects, modules, etc. (hereinafter, “program code”) residing in memory 24. This program code may include an operating system 34 as well as one or more software applications, including one or more task applications 36 and a voice engine 37 that includes a TTS engine 38 and a speech recognition engine 40. The applications may be configured to run on top of the operating system 34 or directly on the processor 22 as “stand-alone” applications. The one or more task applications 36 may be configured to process messages or task instructions for the user 13 by converting the task messages or task instructions into speech output or some other audible output through the voice engine 37. To facilitate synthesizing the speech output, the task application 36 may employ speech synthesis functions provided by TTS engine 38, which converts normal language text into audible speech to play to a user. For the other half of the speech-based system, the device 12 uses the speech recognition engine 40 to gather speech inputs from the user and convert the speech to text or other usable system data.


The processing circuitry and voice engine 37 provide a mechanism to dynamically modify one or more operational parameters of the TTS engine 38. The text-to-speech engine 38 has at least one, and usually more than one, adjustable operational parameter. To this end, the voice engine 37 may operate with task applications 36 to alter the speed, pitch, volume, language, and/or any other operational parameter of the TTS engine depending on speech dialog, conditions in the operating environment, or certain other conditions or variables. For example, the voice engine 37 may reduce the speed of the TTS engine 38 in response to the user 13 asking for help or entering into an unfamiliar area of the task application 36. Other potential uses of the voice engine 37 include altering the operational parameters of the TTS engine 38 based on one or more system events or one or more environmental conditions or variables in a work environment. As will be understood by a person of ordinary skill in the art, the invention may be implemented in a number of different ways, and the specific programs, objects, or other software components for doing so are not limited specifically to the implementations illustrated.
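As a purely illustrative, non-limiting sketch, the snippet below shows one possible shape for such a mechanism. The VoiceEngine class, its adjust() and restore() methods, and the default values are assumptions made for this example rather than the actual interface of voice engine 37.

```python
class VoiceEngine:
    """Illustrative stand-in for a voice engine exposing adjustable TTS parameters."""

    def __init__(self):
        # User-selected defaults chosen at login (assumed values).
        self.params = {"speed": 1.0, "pitch": 1.0, "volume": 0.7, "language": "en-US"}
        self._saved = dict(self.params)

    def adjust(self, **overrides):
        """Temporarily override one or more operational parameters."""
        self._saved = dict(self.params)
        self.params.update(overrides)

    def restore(self):
        """Return to the settings in effect before the last adjustment."""
        self.params = dict(self._saved)

# Example: a task application slows the TTS output while the user is in an
# unfamiliar part of the dialog, then returns to the user's preferred rate.
engine = VoiceEngine()
engine.adjust(speed=0.75)
# ... play the unfamiliar prompt here ...
engine.restore()
```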


Referring now to FIG. 3, a flowchart 50 is presented illustrating one specific example of how the invention, through the processing circuitry and voice engine 37, may be used to dynamically improve the intelligibility of a speech prompt. The particular environmental condition monitored is associated with the type of message or speech prompt being converted by the TTS engine 38. Specifically, the status of the speech prompt as a system message or some other important message is monitored. The message might be associated with a system event, for example. The invention adjusts the TTS operational parameters accordingly. In block 52, a system speech prompt is generated or issued to a user through the device 12. If the prompt is a typical prompt and part of the ongoing speech dialog, it will be generated through the TTS engine 38 based on the user settings for the TTS engine 38. However, if the speech prompt is a system message or other high priority message, it may be desirable to make sure it is understood by the user. The current user settings of the TTS operational parameters may be such that the message would be difficult to understand. For example, the speed of the TTS engine 38 may be too fast. This is particularly so if the system message is one that is not normally part of a conventional dialog, and so is somewhat unfamiliar to the user. The message may be a commonly issued message, such as a broadcast message informing the user 13 that there is a product delivery at the dock; or the message may be a rarely issued message, such as a message informing the user 13 of an emergency condition. Because unfamiliar messages may be less intelligible to the user 13 than commonly heard messages, the task application 36 and/or voice engine 37 may temporarily reduce the speed of the TTS engine 38 during the conversion of the unfamiliar message to improve intelligibility.


To that end, and in accordance with an embodiment of the invention, in block 54 the environmental condition of the speech prompt or message type is monitored and the speech prompt is checked to see if it is a system message or system message type. To allow this determination to be made, the message may be flagged as a system message type by the task application 36 of the device 12 or by the central computer system 21. Persons having ordinary skill in the art will understand that there are many ways by which the determination that the speech prompt is a certain type, such as a system message, may be made, and embodiments of the invention are not limited to any particular way of making this determination or of the other types of speech prompts or messages that might be monitored as part of the environmental conditions.


If the speech prompt is determined not to be a system message or some other monitored message type (“No” branch of decision block 54), the task application 36 proceeds to block 62. In block 62, the message is played to the user 13 through the headset 14 in a normal manner according to the operational parameter settings of the TTS engine 38 as set by the user. However, if the speech prompt is determined to be a system message or some other monitored message type (“Yes” branch of decision block 54), the task application 36 proceeds to block 56 and modifies an operational parameter of the TTS engine. In the embodiment of FIG. 3, the processing circuitry reduces the speed setting of the text-to-speech engine 38 from its current user setting. The slower spoken message may thereby be made more intelligible. Of course, the task application 36 and processing circuitry may also modify other TTS engine operational parameters, such as volume or pitch, for example. In some embodiments, the amount by which the speed setting is reduced may be varied depending on the type of message. For example, less common messages may receive a larger reduction in the speed setting. The message may be flagged as common or uncommon, native language or foreign language, as having a high importance or priority, or as a long or short message, with each type of message being played to the user 13 at a suitable speed. The task application 36 then proceeds to play the message to the user 13 at the modified operational parameter settings, such as the slower speed setting. The user 13 thereby receives the message as a voice message over the headset 14 at a slower rate that may improve the intelligibility of the message.


Once the message has been played, the task application 36 proceeds to block 60, where the operational parameter (i.e., speed setting) is restored to its previous level or setting. The operational parameters of the text-to-speech engine 38 are thus returned to their normal user settings so the user can proceed as desired in the speech dialog. Usually, the speech dialog will then resume as normal. However, if further monitored conditions dictate, the modified settings might be maintained. Alternatively, the modified setting might be restored only after a certain amount of time has elapsed. Advantageously, embodiments of the invention thereby provide certain messages and message types with operational parameters modified to improve the intelligibility of the message automatically while maintaining the preferred settings of the user 13 under normal conditions for the various task applications 36.
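By way of a hedged illustration, the sketch below captures the FIG. 3 flow in Python. The tts object, its set_speed() and say() methods, the message-type flags, and the reduction factors are all assumed for this example; the actual interface of the TTS engine 38 and the chosen values may differ.

```python
# Assumed per-type speed reduction factors; rarer messages are slowed more.
SPEED_REDUCTION = {"broadcast": 0.9, "emergency": 0.7}

def play_prompt(tts, prompt_text, message_type, user_speed):
    """Play a prompt, slowing flagged system messages and then restoring
    the user's preferred speed (blocks 54, 56, 60, and 62 of FIG. 3)."""
    if message_type in SPEED_REDUCTION:                            # "Yes" branch of block 54
        tts.set_speed(user_speed * SPEED_REDUCTION[message_type])  # block 56: reduce speed
        tts.say(prompt_text)                                       # play at the slower rate
        tts.set_speed(user_speed)                                  # block 60: restore setting
    else:                                                          # "No" branch of block 54
        tts.say(prompt_text)                                       # block 62: normal playback
```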


Additional examples of environmental conditions, such as voice data or message types that may be flagged and monitored for improved intelligibility, include messages over a certain length or syllable count, messages that are in a language that is non-native to the TTS engine 38, and messages that are generated when the user 13 requests help, speaks a command, or enters an area of the task application 36 that is not commonly used, and where the user has little experience. While the environmental condition may be based on a message status, or the type of message, or language of the message, length of message, or commonality or frequency of the message, other environmental conditions are also monitored in accordance with embodiments of the invention, and may also be used to modify the operational parameters of the TTS engine 38.


Referring now to FIG. 4, flowchart 70 illustrates another specific example of how an environmental condition may be monitored to improve the intelligibility of a speech-based system message based on input from the user 13, such as a type of command from a user. Specifically, certain user speech, such as spoken commands or types of commands from the user 13, may indicate that they are experiencing difficulties in understanding the audible output or speech prompts from the TTS engine 38. In block 72, a speech prompt is issued by the task application 36 of a device (e.g., “Pick 4 Cases”). The task application 36 then proceeds to block 74 where the task application 36 waits for the user 13 to respond. If the user 13 understands the prompt, the user 13 responds by speaking into the microphone 19 with an appropriate or expected speech phrase (e.g., “4 Cases Picked”). The task application 36 then returns to block 72 (“No” branch of decision block 76), where the next speech prompt in the task is issued (e.g., “Proceed to Aisle 5”).


If, on the other hand, the user 13 does not understand the speech prompt, the user 13 responds with a command type or phrase such as “Say Again”. That is, the speech prompt was not understood, and the user needs it repeated. In this event, the task application 36 proceeds to block 78 (“Yes” branch of decision block 76), where the processing circuitry and task application 36 use the mechanism provided by the voice engine 37 to reduce the speed setting of the TTS engine 38. The task application 36 then proceeds to re-play the speech prompt (Block 80) before proceeding to block 82. In block 82, the modified operational parameter, such as the speed setting of the TTS engine 38, may be restored to its previous pre-altered setting or original setting before returning to block 74.


As previously described, in block 74, the user 13 responds to the slower replayed speech prompt. If the user 13 understands the repeated and slowed speech prompt, the user response may be an affirmative response (e.g., “4 Cases Picked”) so that the task application proceeds to block 72 and issues the next speech prompt in the task list or dialog. If the user 13 still does not understand the speech prompt, the user may repeat the phrase “Say Again”, causing the task application 36 to again proceed back to block 78, where the process is repeated. Although speed is the operational parameter adjusted in the illustrated example, other operational parameters or combinations of such parameters (e.g., volume, pitch, etc.) may be modified as well.


In an alternative embodiment of the invention, the processing circuitry and task application 36 defer restoring the original setting of the modified operational parameter of the TTS engine 38 until an affirmative response is made by the user 13. For example, if the operational parameter is modified in block 78, the prompt is replayed (Block 80) at the modified setting, and the program flow proceeds by arrow 81 to await the user response (Block 74) without restoring the settings to previous levels. An alternative embodiment also incrementally reduces the speed of the TTS engine 38 each time the user 13 responds with a certain spoken command, such as “Say Again”. Each pass through blocks 76 and 78 thereby further reduces the speed of the TTS engine 38 incrementally until a minimum speed setting is reached or the prompt is understood. Once the prompt is sufficiently slowed so that the user 13 understands it, the user 13 may respond in an affirmative manner (“No” branch of decision block 76). The affirmative response, indicating that the monitored environmental condition has returned to a previous state (e.g., the prompt is again intelligible to the user), causes the speed setting or other modified operational parameter settings of the TTS engine 38 to be restored to their original or previous settings (Block 83), and the next speech prompt is issued.
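A minimal sketch of this alternative embodiment, with assumed names (tts, listen, set_speed, say) and assumed step sizes, might look as follows:

```python
MIN_SPEED = 0.5   # assumed floor for the speed setting
STEP = 0.1        # assumed per-"Say Again" reduction

def run_prompt(tts, listen, prompt, user_speed):
    """Issue a prompt and incrementally slow it on each "Say Again",
    restoring the user's speed only after an affirmative response."""
    speed = user_speed
    tts.set_speed(speed)
    tts.say(prompt)                                   # block 72: issue the prompt
    while True:
        response = listen()                           # block 74: wait for the user
        if response.strip().lower() == "say again":   # "Yes" branch of block 76
            speed = max(MIN_SPEED, speed - STEP)      # block 78: slow down a step
            tts.set_speed(speed)
            tts.say(prompt)                           # block 80: replay at the slower rate
        else:                                         # affirmative, e.g. "4 Cases Picked"
            tts.set_speed(user_speed)                 # block 83: restore preferred speed
            return response
```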


Advantageously, embodiments of the invention provide a dynamic modification of an operational parameter of the TTS engine 38 to improve the intelligibility of a TTS message, command, or prompt based on monitoring one or more environmental conditions associated with a user of the speech-based system. More advantageously, in one embodiment, the settings are returned to the previous preferred settings of the user 13 when the environmental condition indicates a return to a previous state, and once the message, command, or prompt has been understood without requiring any additional user action. The amount of time necessary to proceed through the various tasks may thereby be reduced as compared to systems lacking this dynamic modification feature.


While the dynamic modification may be instigated by a specific type of command from the user 13, an environmental condition based on an indication that the user 13 is entering a new or less-familiar area of a task application 36 may also be monitored and used to drive modification of an adjustable operational parameter. For example, if the task application 36 proceeds with dialog that the system has flagged as new or not commonly used by the user 13, the speed parameter of the TTS engine 38 may be reduced or some other operational parameter might be modified.


While several examples noted herein are directed to monitoring environmental conditions related to the intelligibility of the output of the TTS engine 38 that are based upon the specific speech dialog itself, or commands in a speech dialog, or spoken responses from the user 13 that are reflective of intelligibility, other embodiments of the invention are not limited to these monitored environmental conditions or variables. It is therefore understood that there are other environmental conditions directed to the physical operating or work environment of the user 13 that might be monitored rather than the actual dialog of the voice engine 37 and task applications 36. In accordance with another aspect of the invention, such external environmental conditions may also be monitored for the purposes of dynamically and temporarily modifying at least one operational parameter of the TTS engine 38.


The processing circuitry and software of the invention may also monitor one or more external environmental conditions to determine if the user 13 is likely being subjected to adverse working conditions that may affect the intelligibility of the speech from the TTS engine 38. If a determination that the user 13 is encountering such adverse working conditions is made, the voice engine 37 may dynamically override the user settings and modify those operational parameters accordingly. The processing circuitry and task application 36 and/or voice engine 37, may thereby automatically alter the operational parameters of the TTS engine 38 to increase intelligibility of the speech played to the user 13 as disclosed.


Referring now to FIG. 5, a flowchart 90 is presented illustrating one specific example of how the processing circuitry and software, such as the task application 36 and/or voice engine 37, may be used to automatically improve the intelligibility of a voice message, command, or prompt in response to monitoring an environmental condition and determining that the user 13 is encountering an adverse environment in the workplace. In block 92, a prompt is issued by the task application 36 (e.g., “Pick 4 Cases”). The task application 36 then proceeds to block 94. If the task application 36 determines, based on monitored environmental conditions, that the user 13 is not working in an adverse environment (“No” branch of decision block 94), the task application 36 proceeds as normal to block 96. In block 96, the prompt is played to the user 13 using the normal or user-defined operational parameters of the text-to-speech engine 38. The task application 36 then proceeds to block 98 and waits for a user response in the normal manner.


If the task application 36 determines that the user 13 is in an adverse environment, such as a high ambient noise environment (“Yes” branch of decision block 94), the task application 36 proceeds to block 100. In block 100, the task application 36 and/or voice engine 37 causes the operational parameters of the text-to-speech engine 38 to be altered by, for example, increasing the volume. The task application 36 then proceeds to block 102, where the prompt is played with the modified operational parameter settings, before proceeding to block 103. In block 103, a determination is again made, based on the monitored environmental condition, whether the environment is still adverse or noisy. If not, and the environmental condition indicates a return to a previous state (i.e., a normal noise level), the flow proceeds to block 104, where the operational parameter settings of the TTS engine 38 are restored to their previous pre-altered or original settings (e.g., the volume is reduced), before proceeding to block 98, where the task application 36 waits for a user response in the normal manner. If the monitored condition indicates that the environment is still adverse, the modified operational parameter settings remain in effect.
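For illustration, and using assumed names and an assumed noise threshold, the FIG. 5 flow might be sketched as:

```python
NOISE_THRESHOLD_DB = 80.0   # assumed level above which the environment is "adverse"
VOLUME_BOOST = 0.2          # assumed temporary volume increase

def play_with_noise_check(tts, read_noise_db, prompt, user_volume):
    """Play a prompt, boosting volume while the ambient noise is high and
    restoring it once the environment returns to normal (FIG. 5)."""
    if read_noise_db() > NOISE_THRESHOLD_DB:                  # "Yes" branch of block 94
        tts.set_volume(min(1.0, user_volume + VOLUME_BOOST))  # block 100: raise volume
        tts.say(prompt)                                       # block 102: play prompt
        if read_noise_db() <= NOISE_THRESHOLD_DB:             # block 103: re-check environment
            tts.set_volume(user_volume)                       # block 104: restore volume
        # otherwise keep the boosted volume while the environment stays adverse
    else:                                                     # "No" branch of block 94
        tts.say(prompt)                                       # block 96: normal playback
```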


The adverse environment may be indicated by a number of different external factors within the work area of the user 13 and monitored environmental conditions. For example, the ambient noise in the environment may be particularly high due to the presence of noisy equipment, fans, or other factors. A user may also be working in a particularly noisy region of a warehouse. Therefore, in accordance with an embodiment of the invention, the noise level may be monitored with appropriate detectors. The noise level may relate to the intelligibility of the output of the TTS engine 38 because the user may have difficulty in hearing the output due to the ambient noise. To monitor for an adverse environment, certain sensors or detectors may be implemented in the system, such as on the headset or device 12, to monitor such an external environmental variable.


Alternatively, the system 10 and/or the mobile device 12 may provide an indication of a particular adverse environment to the processing circuitry. For example, based upon the actual tasks assigned to the user 13, the system 10 or mobile device 12 may know that the user 13 will be working in a particular environment, such as a freezer environment. Therefore, the monitored environmental condition is the location of a user for their assigned work. Fans in a freezer environment often make the environment noisier. Furthermore, mobile workers working in a freezer environment may be required to wear additional clothing, such as a hat. The user 13 may therefore be listening to the output from the TTS engine 38 through the additional clothing. As such, the system 10 may anticipate that for tasks associated with the freezer environment, an operational parameter of the TTS engine 38 may need to be temporarily modified. For example, the volume setting may need to be increased. Once the user is out of a freezer and returns to the previous state of the monitored environmental condition (i.e., ambient temperature), the operational parameter settings may be returned to a previous or unmodified setting. Other detectors might be used to monitor environmental conditions, such as a thermometer or temperature sensor to sense the temperature of the working environment to indicate the user is in a freezer.


By way of another example, system level data or a sensed condition by the mobile device 12 may indicate that multiple users are operating in the same area as the user 13, thereby adding to the overall noise level of that area. That is, the environmental condition monitored is the proximity of one user to another user. Accordingly, embodiments of the present invention contemplate monitoring one or more of these environmental conditions that relate to the intelligibility of the output of the TTS engine 38, and temporarily modifying the operational parameters of the TTS engine 38 to address the monitored condition or an adverse environment.


To determine that the user 13 is subject to an adverse environment, the task application 36 may examine incoming data in near real time. Based on this data, the task application 36 makes intelligent decisions on how to dynamically modify the operational parameters of the TTS engine 38. Environmental variables, or data, that may be used to determine when adverse conditions are likely to exist include high ambient or background noise levels detected at a detector, such as microphone 19. The device 12 may also determine that the user 13 is in close proximity to other users 13 (and thus subjected to higher levels of background noise or talking) by monitoring Bluetooth® signals to detect other nearby devices 12 of other users. The device 12 or headset 14 may also be configured with suitable devices or detectors to monitor an environmental condition associated with the temperature and detect a change in the ambient temperature that would indicate the user 13 has entered a freezer, as noted. The processing circuitry and task application 36 may also determine that the user is executing a task that requires being in a freezer. In a freezer environment, as noted, the user 13 may be exposed to higher ambient noise levels from fans and may also be wearing additional clothing that would muffle the audio output of the speakers 18 of headset 14. Thus, the task application 36 may be configured to increase the volume setting of the text-to-speech engine 38 in response to the monitored environmental conditions being associated with work in a freezer.
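One way such near-real-time signals might be combined is sketched below; the field names, thresholds, and the decision rule are assumptions for illustration only, not the determination actually made by the task application 36.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentSample:
    ambient_noise_db: float   # e.g. measured at microphone 19
    ambient_temp_c: float     # e.g. from a temperature sensor on the device or headset
    nearby_devices: int       # e.g. other devices 12 discovered over Bluetooth
    task_location: str        # e.g. "freezer", "aisle 5"

def is_adverse(sample: EnvironmentSample) -> bool:
    """Return True when the monitored conditions suggest an adverse environment."""
    return (
        sample.ambient_noise_db > 80.0        # noisy equipment, fans, other workers
        or sample.ambient_temp_c < 0.0        # freezer: fan noise plus muffling clothing
        or sample.nearby_devices >= 3         # several co-located users talking
        or sample.task_location == "freezer"  # assigned work implies the freezer
    )
```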


Another monitored environmental condition might be time of day. The task application 36 may take into account the time of day in determining the likely noise levels. For example, third shift may be less noisy than first shift or certain periods of a shift.


In another embodiment of the invention, the experience level of a user might be the environmental condition that is monitored. For example, the total number of hours logged by a specific user 13 may determine that user's level of experience with the text-to-speech engine, with an area of a task application, or with a specific task application (e.g., a less experienced user may require a slower setting in the text-to-speech engine). As such, the environmental condition of user experience may be checked by system 10 and used to modify the operational parameters of the TTS engine 38 for certain times or task applications 36. For example, a monitored environmental condition might include the amount of time logged by a user with a task application, part of a task application, or some other experience metric. The system 10 tracks such experience as a user works.
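A simple, purely illustrative mapping from logged experience to a speed setting, with assumed hour breakpoints and factors, could look like this:

```python
def speed_for_experience(hours_logged, user_speed):
    """Cap the TTS speed for less experienced users; experienced users
    keep their own preferred setting (assumed breakpoints)."""
    if hours_logged < 10:
        return min(user_speed, 0.8)   # newest users hear slower prompts
    if hours_logged < 40:
        return min(user_speed, 0.9)
    return user_speed                 # experienced users keep their preference
```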


In accordance with another embodiment of the invention, an environmental condition, such as the number of users in a particular work space or area, may affect the operational parameters of the TTS engine 38. System level data of system 10 indicating that multiple users 13 are being sent to the same location or area may also be utilized as a monitored environmental condition to provide an indication that the user 13 is in close proximity to other users 13. Accordingly, an operational parameter such as speed or volume may be adjusted. Likewise, system data indicating that the user 13 is in a location that is known to be noisy (e.g., the user responds to a prompt indicating they are in aisle 5, which is a known noisy location) may be used as a monitored environmental condition to adjust the text-to-speech operational parameters. As noted above, other location or area based information, such as whether the user is making a pick in a freezer where they may be wearing a hat or other protective equipment that muffles the output of the headset speakers 18, may be a monitored environmental condition, and may also trigger the task application 36 to increase the volume setting or reduce the speed and/or pitch settings of the text-to-speech engine 38, for example.


It should be further understood that there are many other monitored environmental conditions, variables, or reasons why it may be desirable to alter the operational parameters of the text-to-speech engine 38 in response to a message, command, or prompt. In one embodiment, the environmental condition that is monitored is the length of the message or prompt being converted by the text-to-speech engine. Another is the language of the message or prompt. Still another environmental condition might be the frequency with which a message or prompt is used by a task application, which indicates how often a user has dealt with that message or prompt. Additional examples of speech prompts or messages that may be flagged for improved intelligibility include messages that are over a certain length or syllable count, messages that are in a language that is non-native to the text-to-speech engine 38 or user 13, important system messages, and commands that are generated when the user 13 requests help or enters an area of the task application 36 that is not commonly used by that user, such that the user may receive messages that they have not heard with great frequency.


Referring now to FIG. 6, a flowchart 110 is presented illustrating another specific example of how embodiments of the invention may be used to automatically improve the intelligibility of a voice prompt in response to a determination that the prompt may be inherently difficult to understand. In block 112, a prompt or utterance is issued by the task application 36 that may contain a portion that is difficult to understand, such as a non-native language word. The task application 36 then proceeds to block 114. If the task application 36 determines that the prompt is in the user's native language and does not contain a non-native word (“No” branch of decision block 114), the task application 36 proceeds to block 116, where the task application 36 plays the prompt using the normal or user-defined text-to-speech operational parameters. The task application 36 then proceeds to block 118, where it waits for a user response in the normal manner.


If the task application 36 determines that the prompt contains a non-native word or phrase (e.g., “Boeuf Bourguignon”) (“Yes” branch of decision block 114), the task application 36 proceeds to block 120. In block 120, the operational parameters of the text-to-speech engine 38 are modified to speak that section of the phrase by changing the language setting. The task application 36 then proceeds to block 122, where the prompt or section of the prompt is played using a text-to-speech engine library or database modified or optimized for the language of the non-native word or phrase. The task application 36 then proceeds to block 124. In block 124, the language setting of the text-to-speech engine 38 is restored to its previous or pre-altered setting (e.g., changed from French back to English) before proceeding to block 118, where the task application 36 waits for a user response in the normal manner.


In some cases, the monitored environmental condition may be a part or section of the speech prompt or utterance that may be unintelligible or difficult to understand with the user selected TTS operational settings for some other reason than the language. A portion may also need to be emphasized because the portion is important. When this occurs, the operational settings of the TTS engine 38 may only require adjustment during playback of a single word or subset of the speech prompt. To this end, the task application 36 may check to see if a portion of the phrase is to be emphasized. So, as illustrated in FIG. 7 (similar to FIG. 6) in block 114, the inquiry may be directed to a prompt containing words or sections of importance or for special emphasis. The dynamic TTS modification is then applied on a word-by-word basis to allow flagged words or subsections of a speech prompt to be played back with altered TTS engine operational settings. That is, the voice engine 37 provides a mechanism whereby the operational parameters of the TTS engine 38 may be altered by the task application 36 for individual spoken words and phrases within a speech prompt. The operational parameters of the TTS engine 38 may thereby be altered to improve the intelligibility of only the words within the speech prompt that need enhancement or emphasis.
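The word-by-word behavior of FIGS. 6 and 7 might be sketched as follows; the segment flags, the tts methods, and the emphasis adjustments are assumed for illustration and do not represent the actual engine interface.

```python
def play_segmented_prompt(tts, segments, user_params):
    """Play a prompt in segments, switching the language library for
    non-native words and adjusting parameters for emphasized words.

    segments: list of (text, flag) pairs where flag is None, a language
    code such as "fr-FR", or the string "emphasis".
    """
    for text, flag in segments:
        if flag is None:
            tts.say(text)                                        # user settings
        elif flag == "emphasis":
            tts.set_params(speed=user_params["speed"] * 0.8,     # slower and a bit louder
                           volume=min(1.0, user_params["volume"] + 0.1))
            tts.say(text)
            tts.set_params(**user_params)                        # restore user settings
        else:                                                    # non-native word or phrase
            tts.set_language(flag)                               # e.g. French library
            tts.say(text)
            tts.set_language(user_params["language"])            # back to native language

# Example segments for 'Pick 2 cases of Boeuf Bourguignon from slot 12':
# [("Pick 2 cases of", None), ("Boeuf Bourguignon", "fr-FR"), ("from slot 12", None)]
```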


The present invention and voice engine 37 may thereby improve the user experience by allowing the processing circuitry and task applications 36 to dynamically adjust text-to-speech operational parameters in response to specific monitored environmental conditions or variables, including working conditions, system events, and user input. The intelligibility of critical spoken data may thereby be improved in the context in which it is given. The invention thus provides a powerful tool that allows task application developers to use system- and context-aware environmental conditions and variables within speech-based tasks to set or modify text-to-speech operational parameters and characteristics. These modified text-to-speech operational parameters and characteristics may dynamically optimize the user experience while still allowing the user to select their original or preferred TTS operational parameters.


A person having ordinary skill in the art will recognize that the environments and specific examples illustrated in FIGS. 1-7 are not intended to limit the scope of embodiments of the invention. In particular, the speech-based system 10, device 12, and/or the central computer system 21 may include fewer or additional components, or alternative configurations, consistent with alternative embodiments of the invention. As another example, the device 12 and headset 14 may be configured to communicate wirelessly. As yet another example, the device 12 and headset 14 may be integrated into a single, self-contained unit that may be worn by the user 13.


Furthermore, while specific operational parameters are noted with respect to the monitored environmental conditions and variables of the examples herein, other operational parameters may also be modified as necessary to increase intelligibility of the output of a TTS engine. For example, operational parameters, such as pitch or speed, may also be adjusted when volume is adjusted. Or, if the speed has slowed down, the volume may be raised. Accordingly, the present invention is not limited to the number of parameters that may be modified or the specific ways in which the operational parameters of the TTS engine may be modified temporarily based on monitored environmental conditions.


Thus, a person having skill in the art will recognize that other alternative hardware and/or software environments may be used without departing from the scope of the invention. For example, a person having ordinary skill in the art will appreciate that the device 12 may include more or fewer applications disposed therein. Furthermore, as noted, the device 12 could be a mobile device or a stationary device, as long as the user can be mobile and still interface with the device. As such, other alternative hardware and software environments may be used without departing from the scope of embodiments of the invention. Still further, the functions and steps described with respect to the task application 36 may be performed by or distributed among other applications, such as the voice engine 37, text-to-speech engine 38, speech recognition engine 40, and/or other applications not shown. Moreover, a person having ordinary skill in the art will appreciate that the terminology used to describe various pieces of data, task messages, task instructions, voice dialogs, speech output, speech input, and machine readable input is merely used for purposes of differentiation and is not intended to be limiting.


The routines executed to implement the embodiments of the invention, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions executed by one or more computing systems are referred to herein as a “sequence of operations”, a “program product”, or, more simply, “program code”. The program code typically comprises one or more instructions that are resident at various times in various memory and storage devices in a computing system (e.g., the device 12 and/or central computer 21), and that, when read and executed by one or more processors of the computing system, cause that computing system to perform the steps necessary to execute steps, elements, and/or blocks embodying the various aspects of embodiments of the invention.


While embodiments of the invention have been described in the context of fully functioning computing systems, those skilled in the art will appreciate that the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of computer readable media or other form used to actually carry out the distribution. Examples of computer readable media include, but are not limited to, physical and tangible recordable type media such as volatile and nonvolatile memory devices, floppy and other removable disks, hard disk drives, and optical disks (e.g., CD-ROMs, DVDs, Blu-ray disks, etc.), among others. Other forms might include remotely hosted services, cloud-based offerings, software-as-a-service (SaaS), and other forms of distribution.


While the present invention has been illustrated by a description of the various embodiments and the examples, and while these embodiments have been described in considerable detail, it is not the intention of the applicants to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art.


As such, the invention in its broader aspects is therefore not limited to the specific details, apparatuses, and methods shown and described herein. A person having ordinary skill in the art will appreciate that any of the blocks of the above flowcharts may be deleted, augmented, made to be simultaneous with another, combined, looped, or be otherwise altered in accordance with the principles of the embodiments of the invention. Accordingly, departures may be made from such details without departing from the scope of applicants' general inventive concept.

Claims
  • 1. A communication system comprising: a speech recognition system configured to gather speech inputs from a user and convert the speech inputs into text; a text-to-speech engine configured to provide an audible output to the user; and processing circuitry configured to: access an inventory management system that is configured to provide one or more tasks, wherein the one or more tasks are audibly output via the text-to-speech engine to the user; monitor an environmental condition; modify an operational parameter of at least one of the text-to-speech engine and the speech recognition system based on the monitored environmental condition, wherein the environmental condition is an experience level of the user with at least one of the text-to-speech engine, the speech recognition system, and an area of a task application; and cause a task of the one or more tasks to be audibly output.
  • 2. The communication system of claim 1, wherein the task of the one or more tasks is an indication to pick a quantity of an item in a warehouse, and wherein the user input acknowledgement is an indication that the quantity of the item has been picked.
  • 3. The communication system of claim 1, wherein the processing circuitry is further configured to generate another task of the one or more tasks based on the experience level of the user with at least one of the text-to-speech engine, the speech recognition system, and the area of the task application; and audibly output the task and the another task.
  • 4. The communication system of claim 3, wherein the processing circuitry that is configured to receive the user input acknowledgement is further configured to receive the user input acknowledgement in response to at least one of the task and the another task.
  • 5. The communication system of claim 3, wherein the processing circuitry is configured to receive the user input acknowledgement in response to each of the task and the another task before a next task of the one or more tasks is audibly output.
  • 6. The communication system of claim 1, wherein the processing circuitry is further configured to restore the operational parameter of the text-to-speech engine to a previous setting after a predefined amount of time has elapsed.
  • 7. The communication system of claim 1, wherein the monitored environmental condition further comprises at least one of: a type of a message being converted by the text-to-speech engine; a type of a command received from the user; an ambient temperature of the user's environment; an amount of time logged by the user with the task application; a language of the message being converted by the text-to-speech engine; a length of the message being converted by the text-to-speech engine; and a frequency that the message being converted by the text-to-speech engine is used by the task application.
  • 8. The communication system of claim 1, wherein the user input acknowledgement is received via a user headset, wherein the headset comprises a speaker and a microphone.
  • 9. The communication system of claim 1, wherein the processing circuitry is further configured to log the user into the inventory management system based on a decoded indicia scanned by an identification code reader.
  • 10. A communication system comprising: a speech recognition system configured to gather speech inputs from a user and convert the speech inputs to text; a text-to-speech engine configured to provide an audible output to the user; and processing circuitry configured to: access an inventory management system that is configured to provide one or more tasks, wherein the one or more tasks are audibly output via the text-to-speech engine to the user; monitor an environmental condition, wherein the environmental condition is an ambient noise level; modify an operational parameter of at least one of the text-to-speech engine and the speech recognition system based on the monitored environmental condition; and cause a task of the one or more tasks to be audibly output.
  • 11. The communication system of claim 10, wherein the task of the one or more tasks is an indication to pick a quantity of an item in a warehouse, and wherein the user input acknowledgement is an indication that the quantity of the item has been picked.
  • 12. The communication system of claim 10, wherein the processing circuitry is further configured to generate another task of the one or more tasks based on an experience level of the user with at least one of the text-to-speech engine, the speech recognition system, and an area of a task application, the task; and audibly output the task and the another task.
  • 13. The communication system of claim 12, wherein the processing circuitry that is configured to receive the user input acknowledgement is further configured to receive the user input acknowledgement in response to at least one of the task and the another task.
  • 14. The communication system of claim 12, wherein the processing circuitry is configured to receive the user input acknowledgement in response to each of the task and the another task before a next task of the one or more tasks is audibly output.
  • 15. The communication system of claim 10, wherein the monitored environmental condition further comprises at least one of: a type of a message being converted by the text-to-speech engine; a type of a command received from the user; an ambient temperature of the user's environment; an experience level of the user with the text-to-speech engine; an experience level of the user with an area of a task application; an amount of time logged by the user with the task application; a language of the message being converted by the text-to-speech engine; a length of the message being converted by the text-to-speech engine; and a frequency that the message being converted by the text-to-speech engine is used by the task application.
  • 16. The communication system of claim 10, wherein the user input acknowledgement is received via a user headset, wherein the user headset comprises a speaker and a microphone and wherein the processing circuitry is further configured to log the user into the inventory management system based on a decoded indicia scanned by an identification code reader.
  • 17. A communication system comprising: a speech recognition system configured to gather speech inputs from a user and convert the speech inputs to text; a text-to-speech engine configured to provide an audible output to the user; and processing circuitry configured to: access an inventory management system that is configured to provide one or more tasks, wherein the one or more tasks are audibly output via the text-to-speech engine to the user, wherein the one or more tasks comprise at least one of a type of item, a number of items, and a location of items in a warehouse; monitor an environmental condition, wherein the environmental condition comprises an ambient noise level and an experience level of the user with at least one of the text-to-speech engine, the speech recognition system, and an area of a task application; modify an operational parameter of at least one of the text-to-speech engine and the speech recognition system based on the monitored environmental condition; and cause a task of the one or more tasks to be audibly output.
  • 18. The communication system of claim 17, wherein the user input acknowledgement is an indication that the quantity of the item has been picked.
  • 19. The communication system of claim 17, wherein the processing circuitry is further configured to generate another task of the one or more tasks based on the experience level of the user with at least one of the text-to-speech engine, the speech recognition system, and the area of the task application; and audibly output the task and the another task.
  • 20. The communication system of claim 19, wherein the processing circuitry that is configured to receive the user input acknowledgement is further configured to receive the user input acknowledgement in response to at least one of the task and the another task.
  • 21. The communication system of claim 19, wherein the processing circuitry is configured to receive the user input acknowledgement in response to each of the task and the another task before a next task of the one or more tasks is audibly output.
  • 22. The communication system of claim 17, wherein the monitored environmental condition further comprises at least one of: a type of a message being converted by the text-to-speech engine; a type of a command received from the user; an ambient temperature of the user's environment; an amount of time logged by the user with the task application; a language of the message being converted by the text-to-speech engine; a length of the message being converted by the text-to-speech engine; and a frequency that the message being converted by the text-to-speech engine is used by the task application.
  • 23. The communication system of claim 17, wherein the user input acknowledgement is received via a user headset, wherein the user headset comprises a speaker and a microphone and wherein the processing circuitry is further configured to log the user into the inventory management system based on a decoded indicia scanned by an identification code reader.
  • 24. A method comprising: accessing an inventory management system that is configured to provide one or more tasks, wherein the one or more tasks are audibly output via a text-to-speech engine to a user, wherein the one or more tasks comprise at least one of a type of item, a number of items, and a location of items in a warehouse, and wherein the text-to-speech engine is configured to provide an audible output to the user; monitoring an environmental condition, wherein the environmental condition comprises an ambient noise level and an experience level of the user with at least one of the text-to-speech engine, a speech recognition system, and an area of a task application, wherein the speech recognition system is configured to gather speech inputs from the user and convert the speech inputs to text; modifying an operational parameter of at least one of the text-to-speech engine and the speech recognition system based on the monitored environmental condition; and causing a task of the one or more tasks to be audibly output.
  • 25. The method of claim 24, wherein the user input acknowledgement is an indication that the quantity of the item has been picked.
  • 26. The method of claim 24, further comprising generating another task of the one or more tasks based on the experience level of the user with at least one of the text-to-speech engine, the speech recognition system, and the area of the task application; and audibly outputting the task and the another task.
  • 27. The method of claim 26, further comprising receiving the user input acknowledgement in response to each of the task and the another task.
  • 28. The method of claim 26, further comprising receiving user input acknowledgement in response to each of the task and the another task before causing a next task of the one or more tasks to be audibly output.
  • 29. The method of claim 24, wherein the monitored environmental condition further comprises at least one of: a type of a message being converted by the text-to-speech engine; a type of a command received from the user; an ambient temperature of the user's environment; an amount of time logged by the user with the task application; a language of the message being converted by the text-to-speech engine; a length of the message being converted by the text-to-speech engine; and a frequency that the message being converted by the text-to-speech engine is used by the task application.
  • 30. The method of claim 24, wherein the user input acknowledgement is received via a user headset, wherein the user headset comprises a speaker and a microphone.
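To make the parameter-adjustment behavior recited in claims 1, 10, 17, and 24 concrete, the following is a minimal illustrative sketch rather than the patented implementation: it assumes a hypothetical TTS engine exposing only a speech rate and an output gain, and it derives new values from a monitored ambient noise level and the user's experience level. Every name, threshold, and scaling factor here is invented for illustration.

# Illustrative sketch only: hypothetical names and values, not the patented implementation.
from dataclasses import dataclass


@dataclass
class TtsParameters:
    """Adjustable operational parameters of a hypothetical TTS engine."""
    speech_rate: float = 1.0   # 1.0 = normal speaking rate
    volume_db: float = 0.0     # output gain relative to nominal, in dB


def adjust_tts_parameters(params: TtsParameters,
                          ambient_noise_db: float,
                          user_experience_hours: float) -> TtsParameters:
    """Return new parameters based on monitored environmental conditions."""
    adjusted = TtsParameters(params.speech_rate, params.volume_db)

    # Ambient noise: raise output gain in proportion to noise above a
    # 60 dB baseline, capped at +12 dB.
    if ambient_noise_db > 60.0:
        adjusted.volume_db = min(12.0, (ambient_noise_db - 60.0) * 0.5)

    # Experience level: less experienced users hear a slower speaking rate.
    if user_experience_hours < 40.0:
        adjusted.speech_rate = 0.8

    return adjusted


if __name__ == "__main__":
    base = TtsParameters()
    noisy_novice = adjust_tts_parameters(base, ambient_noise_db=78.0,
                                         user_experience_hours=10.0)
    print(noisy_novice)   # TtsParameters(speech_rate=0.8, volume_db=9.0)

In this sketch a louder environment raises the output gain and a less experienced user receives a slower speaking rate; a deployed system could map any of the other monitored conditions listed in claims 7, 15, 22, and 29 onto its engine's parameters in the same way.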
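Claim 6 recites restoring the operational parameter of the text-to-speech engine to a previous setting after a predefined amount of time has elapsed. Below is a minimal sketch of one way such a timed restore could be arranged, using a hypothetical engine stub and Python's standard threading.Timer; the class and function names are assumptions, not the claimed system's API.

# Illustrative sketch only: hypothetical names, not the patented implementation.
import threading


class TtsEngineStub:
    """Stand-in for a TTS engine exposing a single adjustable parameter."""

    def __init__(self) -> None:
        self.speech_rate = 1.0


def modify_with_restore(engine: TtsEngineStub,
                        new_rate: float,
                        restore_after_s: float) -> threading.Timer:
    """Apply a temporary speech-rate change, then restore the prior setting
    once the predefined amount of time has elapsed."""
    previous_rate = engine.speech_rate
    engine.speech_rate = new_rate

    def restore() -> None:
        engine.speech_rate = previous_rate

    timer = threading.Timer(restore_after_s, restore)
    timer.start()
    return timer


if __name__ == "__main__":
    engine = TtsEngineStub()
    t = modify_with_restore(engine, new_rate=0.8, restore_after_s=0.1)
    t.join()                   # wait for the restore to fire
    print(engine.speech_rate)  # 1.0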
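Claims 2 through 5, 11 through 14, and 25 through 28 describe audibly outputting tasks and receiving a user input acknowledgement (for example, that a quantity of an item has been picked) before the next task is output. The sketch below shows one possible dialog loop under those assumptions; the speak and listen callables and the accepted confirmation words are hypothetical placeholders, not the claimed system's vocabulary.

# Illustrative sketch only: hypothetical dialog loop, not the patented implementation.
from typing import Callable, Iterable


def run_task_dialog(tasks: Iterable[str],
                    speak: Callable[[str], None],
                    listen: Callable[[], str]) -> None:
    """Audibly output each task and require an acknowledgement before
    moving on. `speak` stands in for the TTS engine and `listen` for the
    speech recognition system returning decoded user speech as text."""
    for task in tasks:
        speak(task)
        # Block on the recognizer until the user confirms the pick.
        while listen().strip().lower() not in {"ready", "done", "picked"}:
            speak("Please confirm when the quantity has been picked.")


if __name__ == "__main__":
    replies = iter(["huh?", "done", "picked"])
    run_task_dialog(
        tasks=["Pick 3 of item 42 from aisle 7.",
               "Pick 1 of item 9 from aisle 2."],
        speak=print,                   # print stands in for audible output
        listen=lambda: next(replies),  # canned recognizer responses
    )

The canned replies in the usage example stand in for decoded speech from the recognizer; the first, unrecognized reply triggers a re-prompt before the acknowledgement is accepted and the next task is output.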
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 16/869,228, titled Systems and Methods for Dynamically Improving User Intelligibility of Synthesized Speech in a Work Environment, filed May 7, 2020, which is a continuation of U.S. patent application Ser. No. 15/635,326, titled Systems and Methods for Dynamically Improving User Intelligibility of Synthesized Speech in a Work Environment, filed Jun. 28, 2017 (now U.S. Pat. No. 10,685,643), which is a continuation of U.S. patent application Ser. No. 14/561,648 for Systems and Methods for Dynamically Improving User Intelligibility of Synthesized Speech in a Work Environment filed Dec. 5, 2014 (now U.S. Pat. No. 9,697,818), which claims the benefit of U.S. patent application Ser. No. 13/474,921 for Systems and Methods for Dynamically Improving User Intelligibility of Synthesized Speech in a Work Environment filed May 18, 2012 (now U.S. Pat. No. 8,914,290), which claims the benefit of U.S. Patent Application No. 61/488,587 for Systems and Methods for Dynamically Improving User Intelligibility of Synthesized Speech in a Work Environment filed May 20, 2011. Each of the foregoing patent applications, patent publications, and patents is hereby incorporated by reference in its entirety.

US Referenced Citations (776)
Number Name Date Kind
4882757 Fisher et al. Nov 1989 A
4928302 Kaneuchi et al. May 1990 A
4959864 Van et al. Sep 1990 A
4977598 Doddington et al. Dec 1990 A
5127043 Hunt et al. Jun 1992 A
5127055 Larkey Jun 1992 A
5230023 Nakano Jul 1993 A
5297194 Hunt et al. Mar 1994 A
5349645 Zhao Sep 1994 A
5428707 Gould et al. Jun 1995 A
5457768 Tsuboi et al. Oct 1995 A
5465317 Epstein Nov 1995 A
5488652 Bielby et al. Jan 1996 A
5566272 Brems et al. Oct 1996 A
5602960 Hon et al. Feb 1997 A
5625748 McDonough et al. Apr 1997 A
5640485 Ranta Jun 1997 A
5644680 Bielby et al. Jul 1997 A
5651094 Takagi et al. Jul 1997 A
5684925 Morin et al. Nov 1997 A
5710864 Juang et al. Jan 1998 A
5717826 Setlur et al. Feb 1998 A
5737489 Chou et al. Apr 1998 A
5737724 Atal et al. Apr 1998 A
5742928 Suzuki Apr 1998 A
5774837 Yeldener et al. Jun 1998 A
5774841 Salazar et al. Jun 1998 A
5774858 Taubkin et al. Jun 1998 A
5787387 Aguilar Jul 1998 A
5797123 Chou et al. Aug 1998 A
5799273 Mitchell et al. Aug 1998 A
5832430 Lleida et al. Nov 1998 A
5839103 Mammone et al. Nov 1998 A
5842163 Weintraub Nov 1998 A
5870706 Alshawi Feb 1999 A
5890108 Yeldener Mar 1999 A
5893057 Fujimoto et al. Apr 1999 A
5893059 Raman Apr 1999 A
5893902 Transue et al. Apr 1999 A
5895447 Ittycheriah et al. Apr 1999 A
5899972 Miyazawa et al. May 1999 A
5946658 Miyazawa et al. Aug 1999 A
5960447 Holt et al. Sep 1999 A
5970450 Hattori Oct 1999 A
6003002 Netsch Dec 1999 A
6006183 Lai et al. Dec 1999 A
6073096 Gao et al. Jun 2000 A
6076057 Narayanan et al. Jun 2000 A
6088669 Maes Jul 2000 A
6094632 Hattori Jul 2000 A
6101467 Bartosik Aug 2000 A
6122612 Goldberg Sep 2000 A
6151574 Lee et al. Nov 2000 A
6182038 Balakrishnan et al. Jan 2001 B1
6192343 Morgan et al. Feb 2001 B1
6205426 Nguyen et al. Mar 2001 B1
6230129 Morin et al. May 2001 B1
6230138 Everhart May 2001 B1
6233555 Parthasarathy et al. May 2001 B1
6233559 Balakrishnan May 2001 B1
6243713 Nelson et al. Jun 2001 B1
6246980 Glorion et al. Jun 2001 B1
6292782 Weideman Sep 2001 B1
6330536 Parthasarathy et al. Dec 2001 B1
6351730 Chen Feb 2002 B2
6374212 Phillips et al. Apr 2002 B2
6374220 Kao Apr 2002 B1
6374221 Haimi-Cohen Apr 2002 B1
6374227 Ye Apr 2002 B1
6377662 Hunt et al. Apr 2002 B1
6377949 Gilmour Apr 2002 B1
6397179 Crespo et al. May 2002 B2
6397180 Jaramillo et al. May 2002 B1
6421640 Dolfing et al. Jul 2002 B1
6438519 Campbell et al. Aug 2002 B1
6438520 Curt et al. Aug 2002 B1
6456973 Fado et al. Sep 2002 B1
6487532 Schoofs et al. Nov 2002 B1
6496800 Kong et al. Dec 2002 B1
6505155 Vanbuskirk et al. Jan 2003 B1
6507816 Ortega Jan 2003 B2
6526380 Thelen et al. Feb 2003 B1
6539078 Hunt et al. Mar 2003 B1
6542866 Jiang et al. Apr 2003 B1
6567775 Maali et al. May 2003 B1
6571210 Hon et al. May 2003 B2
6581036 Varney, Jr. Jun 2003 B1
6587824 Everhart et al. Jul 2003 B1
6594629 Basu et al. Jul 2003 B1
6598017 Yamamoto et al. Jul 2003 B1
6606598 Holthouse et al. Aug 2003 B1
6629072 Thelen et al. Sep 2003 B1
6662163 Albayrak et al. Dec 2003 B1
6675142 Ortega et al. Jan 2004 B2
6701293 Bennett et al. Mar 2004 B2
6725199 Brittan et al. Apr 2004 B2
6732074 Kuroda May 2004 B1
6735562 Zhang et al. May 2004 B1
6754627 Woodward Jun 2004 B2
6766295 Murveit et al. Jul 2004 B1
6799162 Goronzy et al. Sep 2004 B1
6813491 McKinney Nov 2004 B1
6829577 Gleason Dec 2004 B1
6832224 Gilmour Dec 2004 B2
6832725 Gardiner et al. Dec 2004 B2
6834265 Balasuriya Dec 2004 B2
6839667 Reich Jan 2005 B2
6856956 Thrasher et al. Feb 2005 B2
6868381 Peters et al. Mar 2005 B1
6868385 Gerson Mar 2005 B1
6871177 Hovell et al. Mar 2005 B1
6876968 Veprek Apr 2005 B2
6876987 Bahler et al. Apr 2005 B2
6879956 Honda et al. Apr 2005 B1
6882972 Kompe et al. Apr 2005 B2
6910012 Hartley et al. Jun 2005 B2
6917918 Rockenbeck et al. Jul 2005 B2
6922466 Peterson et al. Jul 2005 B1
6922669 Schalk et al. Jul 2005 B2
6941264 Konopka et al. Sep 2005 B2
6961700 Mitchell et al. Nov 2005 B2
6961702 Dobler et al. Nov 2005 B2
6985859 Morin Jan 2006 B2
6988068 Fado et al. Jan 2006 B2
6999931 Zhou Feb 2006 B2
7010489 Lewis et al. Mar 2006 B1
7031918 Hwang Apr 2006 B2
7035800 Tapper Apr 2006 B2
7039166 Peterson et al. May 2006 B1
7050550 Steinbiss et al. May 2006 B2
7058575 Zhou Jun 2006 B2
7062435 Tzirkel-Hancock et al. Jun 2006 B2
7062441 Townshend Jun 2006 B1
7065488 Yajima et al. Jun 2006 B2
7069513 Damiba Jun 2006 B2
7072750 Pi et al. Jul 2006 B2
7072836 Shao Jul 2006 B2
7103542 Doyle Sep 2006 B2
7103543 Hernandez-Abrego et al. Sep 2006 B2
7128266 Zhu et al. Oct 2006 B2
7159783 Walczyk et al. Jan 2007 B2
7203644 Anderson et al. Apr 2007 B2
7203651 Baruch et al. Apr 2007 B2
7216148 Matsunami et al. May 2007 B2
7225127 Lucke May 2007 B2
7240010 Papadimitriou et al. Jul 2007 B2
7266494 Droppo et al. Sep 2007 B2
7272556 Aguilar et al. Sep 2007 B1
7305340 Rosen et al. Dec 2007 B1
7319960 Riis et al. Jan 2008 B2
7386454 Gopinath et al. Jun 2008 B2
7392186 Duan et al. Jun 2008 B2
7401019 Seide et al. Jul 2008 B2
7406413 Geppert et al. Jul 2008 B2
7413127 Ehrhart et al. Aug 2008 B2
7430509 Jost et al. Sep 2008 B2
7454340 Sakai et al. Nov 2008 B2
7457745 Kadambe et al. Nov 2008 B2
7493258 Kibkalo et al. Feb 2009 B2
7542907 Epstein et al. Jun 2009 B2
7565282 Carus et al. Jul 2009 B2
7609669 Sweeney et al. Oct 2009 B2
7684984 Kemp Mar 2010 B2
7726575 Wang et al. Jun 2010 B2
7813771 Escott Oct 2010 B2
7827032 Braho et al. Nov 2010 B2
7865362 Braho et al. Jan 2011 B2
7885419 Wahl et al. Feb 2011 B2
7895039 Braho et al. Feb 2011 B2
7949533 Braho et al. May 2011 B2
7983912 Hirakawa et al. Jul 2011 B2
8200495 Braho et al. Jun 2012 B2
8255219 Braho et al. Aug 2012 B2
8294969 Plesko Oct 2012 B2
8317105 Kotlarsky et al. Nov 2012 B2
8322622 Liu Dec 2012 B2
8366005 Kotlarsky et al. Feb 2013 B2
8371507 Haggerty et al. Feb 2013 B2
8374870 Braho et al. Feb 2013 B2
8376233 Horn et al. Feb 2013 B2
8381979 Franz Feb 2013 B2
8390909 George Mar 2013 B2
8408464 Zhu et al. Apr 2013 B2
8408468 Van et al. Apr 2013 B2
8408469 Good Apr 2013 B2
8424768 Rueblinger et al. Apr 2013 B2
8448863 Xian et al. May 2013 B2
8457013 Essinger et al. Jun 2013 B2
8459557 Havens et al. Jun 2013 B2
8469272 Kearney Jun 2013 B2
8474712 Kearney et al. Jul 2013 B2
8479992 Kotlarsky et al. Jul 2013 B2
8490877 Kearney Jul 2013 B2
8517271 Kotlarsky et al. Aug 2013 B2
8523076 Good Sep 2013 B2
8528818 Ehrhart et al. Sep 2013 B2
8532282 Bracey Sep 2013 B2
8544737 Gomez et al. Oct 2013 B2
8548420 Grunow et al. Oct 2013 B2
8550335 Samek et al. Oct 2013 B2
8550354 Gannon et al. Oct 2013 B2
8550357 Kearney Oct 2013 B2
8556174 Kosecki et al. Oct 2013 B2
8556176 Van et al. Oct 2013 B2
8556177 Hussey et al. Oct 2013 B2
8559767 Barber et al. Oct 2013 B2
8561895 Gomez et al. Oct 2013 B2
8561903 Sauerwein, Jr. Oct 2013 B2
8561905 Edmonds et al. Oct 2013 B2
8565107 Pease et al. Oct 2013 B2
8571307 Li et al. Oct 2013 B2
8579200 Samek et al. Nov 2013 B2
8583924 Caballero et al. Nov 2013 B2
8584945 Wang et al. Nov 2013 B2
8587595 Wang Nov 2013 B2
8587697 Hussey et al. Nov 2013 B2
8588869 Sauerwein et al. Nov 2013 B2
8590789 Nahill et al. Nov 2013 B2
8596539 Havens et al. Dec 2013 B2
8596542 Havens et al. Dec 2013 B2
8596543 Havens et al. Dec 2013 B2
8599271 Havens et al. Dec 2013 B2
8599957 Peake et al. Dec 2013 B2
8600158 Li et al. Dec 2013 B2
8600167 Showering Dec 2013 B2
8602309 Longacre et al. Dec 2013 B2
8608053 Meier et al. Dec 2013 B2
8608071 Liu et al. Dec 2013 B2
8611309 Wang et al. Dec 2013 B2
8615487 Gomez et al. Dec 2013 B2
8621123 Caballero Dec 2013 B2
8622303 Meier et al. Jan 2014 B2
8628013 Ding Jan 2014 B2
8628015 Wang et al. Jan 2014 B2
8628016 Winegar Jan 2014 B2
8629926 Wang Jan 2014 B2
8630491 Longacre et al. Jan 2014 B2
8635309 Berthiaume et al. Jan 2014 B2
8636200 Kearney Jan 2014 B2
8636212 Nahill et al. Jan 2014 B2
8636215 Ding et al. Jan 2014 B2
8636224 Wang Jan 2014 B2
8638806 Wang et al. Jan 2014 B2
8640958 Lu et al. Feb 2014 B2
8640960 Wang et al. Feb 2014 B2
8643717 Li et al. Feb 2014 B2
8644489 Noble et al. Feb 2014 B1
8646692 Meier et al. Feb 2014 B2
8646694 Wang et al. Feb 2014 B2
8657200 Ren et al. Feb 2014 B2
8659397 Vargo et al. Feb 2014 B2
8668149 Good Mar 2014 B2
8678285 Kearney Mar 2014 B2
8678286 Smith et al. Mar 2014 B2
8682077 Longacre, Jr. Mar 2014 B1
D702237 Oberpriller et al. Apr 2014 S
8687282 Feng et al. Apr 2014 B2
8692927 Pease et al. Apr 2014 B2
8695880 Bremer et al. Apr 2014 B2
8698949 Grunow et al. Apr 2014 B2
8702000 Barber et al. Apr 2014 B2
8717494 Gannon May 2014 B2
8720783 Biss et al. May 2014 B2
8723804 Fletcher et al. May 2014 B2
8723904 Marty et al. May 2014 B2
8727223 Wang May 2014 B2
8740082 Wilz, Sr. Jun 2014 B2
8740085 Furlong et al. Jun 2014 B2
8746563 Hennick et al. Jun 2014 B2
8750445 Peake et al. Jun 2014 B2
8752766 Xian et al. Jun 2014 B2
8756059 Braho et al. Jun 2014 B2
8757495 Qu et al. Jun 2014 B2
8760563 Koziol et al. Jun 2014 B2
8763909 Reed et al. Jul 2014 B2
8777108 Coyle Jul 2014 B2
8777109 Oberpriller et al. Jul 2014 B2
8779898 Havens et al. Jul 2014 B2
8781520 Payne et al. Jul 2014 B2
8783573 Havens et al. Jul 2014 B2
8789757 Barten Jul 2014 B2
8789758 Hawley et al. Jul 2014 B2
8789759 Xian et al. Jul 2014 B2
8794520 Wang et al. Aug 2014 B2
8794522 Ehrhart Aug 2014 B2
8794525 Amundsen et al. Aug 2014 B2
8794526 Wang et al. Aug 2014 B2
8798367 Ellis Aug 2014 B2
8807431 Wang et al. Aug 2014 B2
8807432 Van et al. Aug 2014 B2
8820630 Qu et al. Sep 2014 B2
8822848 Meagher Sep 2014 B2
8824692 Sheerin et al. Sep 2014 B2
8824696 Braho Sep 2014 B2
8842849 Wahl et al. Sep 2014 B2
8844822 Kotlarsky et al. Sep 2014 B2
8844823 Fritz et al. Sep 2014 B2
8849019 Li et al. Sep 2014 B2
D716285 Chaney et al. Oct 2014 S
8851383 Yeakley et al. Oct 2014 B2
8854633 Laffargue et al. Oct 2014 B2
8866963 Grunow et al. Oct 2014 B2
8868421 Braho et al. Oct 2014 B2
8868519 Maloy et al. Oct 2014 B2
8868802 Barten Oct 2014 B2
8868803 Caballero Oct 2014 B2
8870074 Gannon Oct 2014 B1
8879639 Sauerwein, Jr. Nov 2014 B2
8880426 Smith Nov 2014 B2
8881983 Havens et al. Nov 2014 B2
8881987 Wang Nov 2014 B2
8903172 Smith Dec 2014 B2
8908995 Benos et al. Dec 2014 B2
8910870 Li et al. Dec 2014 B2
8910875 Ren et al. Dec 2014 B2
8914290 Hendrickson et al. Dec 2014 B2
8914788 Pettinelli et al. Dec 2014 B2
8915439 Feng et al. Dec 2014 B2
8915444 Havens et al. Dec 2014 B2
8916789 Woodburn Dec 2014 B2
8918250 Hollifield Dec 2014 B2
8918564 Caballero Dec 2014 B2
8925818 Kosecki et al. Jan 2015 B2
8939374 Jovanovski et al. Jan 2015 B2
8942480 Duane Jan 2015 B2
8944313 Williams et al. Feb 2015 B2
8944327 Meier et al. Feb 2015 B2
8944332 Harding et al. Feb 2015 B2
8950678 Germaine et al. Feb 2015 B2
D723560 Zhou et al. Mar 2015 S
8967468 Gomez et al. Mar 2015 B2
8971346 Sevier Mar 2015 B2
8976030 Cunningham et al. Mar 2015 B2
8976368 El et al. Mar 2015 B2
8978981 Guan Mar 2015 B2
8978983 Bremer et al. Mar 2015 B2
8978984 Hennick et al. Mar 2015 B2
8985456 Zhu et al. Mar 2015 B2
8985457 Soule et al. Mar 2015 B2
8985459 Kearney et al. Mar 2015 B2
8985461 Gelay et al. Mar 2015 B2
8988578 Showering Mar 2015 B2
8988590 Gillet et al. Mar 2015 B2
8991704 Hopper et al. Mar 2015 B2
8996194 Davis et al. Mar 2015 B2
8996384 Funyak et al. Mar 2015 B2
8998091 Edmonds et al. Apr 2015 B2
9002641 Showering Apr 2015 B2
9007368 Laffargue et al. Apr 2015 B2
9010641 Qu et al. Apr 2015 B2
9015513 Murawski et al. Apr 2015 B2
9016576 Brady et al. Apr 2015 B2
D730357 Fitch et al. May 2015 S
9022288 Nahill et al. May 2015 B2
9030964 Essinger et al. May 2015 B2
9033240 Smith et al. May 2015 B2
9033242 Gillet et al. May 2015 B2
9036054 Koziol et al. May 2015 B2
9037344 Chamberlin May 2015 B2
9038911 Xian et al. May 2015 B2
9038915 Smith May 2015 B2
D730901 Oberpriller et al. Jun 2015 S
D730902 Fitch et al. Jun 2015 S
D733112 Chaney et al. Jun 2015 S
9047098 Barten Jun 2015 B2
9047359 Caballero et al. Jun 2015 B2
9047420 Caballero Jun 2015 B2
9047525 Barber et al. Jun 2015 B2
9047531 Showering et al. Jun 2015 B2
9047865 Aguilar et al. Jun 2015 B2
9049640 Wang et al. Jun 2015 B2
9053055 Caballero Jun 2015 B2
9053378 Hou et al. Jun 2015 B1
9053380 Xian et al. Jun 2015 B2
9057641 Amundsen et al. Jun 2015 B2
9058526 Powilleit Jun 2015 B2
9064165 Havens et al. Jun 2015 B2
9064167 Xian et al. Jun 2015 B2
9064168 Todeschini et al. Jun 2015 B2
9064254 Todeschini et al. Jun 2015 B2
9066032 Wang Jun 2015 B2
9070032 Corcoran Jun 2015 B2
D734339 Zhou et al. Jul 2015 S
D734751 Oberpriller et al. Jul 2015 S
9082023 Feng et al. Jul 2015 B2
9135913 Toru Sep 2015 B2
9224022 Ackley et al. Dec 2015 B2
9224027 Van et al. Dec 2015 B2
D747321 London et al. Jan 2016 S
9230140 Ackley Jan 2016 B1
9250712 Todeschini Feb 2016 B1
9258033 Showering Feb 2016 B2
9261398 Amundsen et al. Feb 2016 B2
9262633 Todeschini et al. Feb 2016 B1
9262664 Soule et al. Feb 2016 B2
9274806 Barten Mar 2016 B2
9282501 Wang et al. Mar 2016 B2
9292969 Laffargue et al. Mar 2016 B2
9298667 Caballero Mar 2016 B2
9310609 Rueblinger et al. Apr 2016 B2
9319548 Showering et al. Apr 2016 B2
D757009 Oberpriller et al. May 2016 S
9342724 McCloskey et al. May 2016 B2
9342827 Smith May 2016 B2
9355294 Smith et al. May 2016 B2
9367722 Xian et al. Jun 2016 B2
9375945 Bowles Jun 2016 B1
D760719 Zhou et al. Jul 2016 S
9390596 Todeschini Jul 2016 B1
9396375 Qu et al. Jul 2016 B2
9398008 Todeschini et al. Jul 2016 B2
D762604 Fitch et al. Aug 2016 S
D762647 Fitch et al. Aug 2016 S
9407840 Wang Aug 2016 B2
9412242 Van et al. Aug 2016 B2
9418252 Nahill et al. Aug 2016 B2
D766244 Zhou et al. Sep 2016 S
9443123 Hejl Sep 2016 B2
9443222 Singel et al. Sep 2016 B2
9448610 Davis et al. Sep 2016 B2
9478113 Xie et al. Oct 2016 B2
D771631 Fitch et al. Nov 2016 S
9507974 Todeschini Nov 2016 B1
D777166 Bidwell et al. Jan 2017 S
9582696 Barber et al. Feb 2017 B2
D783601 Schulte et al. Apr 2017 S
9616749 Chamberlin Apr 2017 B2
9618993 Murawski et al. Apr 2017 B2
D785617 Bidwell et al. May 2017 S
D785636 Oberpriller et al. May 2017 S
D790505 Vargo et al. Jun 2017 S
D790546 Zhou et al. Jun 2017 S
D790553 Fitch et al. Jun 2017 S
9697818 Hendrickson et al. Jul 2017 B2
9715614 Todeschini et al. Jul 2017 B2
9728188 Rosen et al. Aug 2017 B1
9734493 Gomez et al. Aug 2017 B2
9786101 Ackley Oct 2017 B2
9813799 Gecawicz et al. Nov 2017 B2
9857167 Jovanovski et al. Jan 2018 B2
9891612 Charpentier et al. Feb 2018 B2
9891912 Balakrishnan et al. Feb 2018 B2
9892876 Bandringa Feb 2018 B2
9954871 Hussey et al. Apr 2018 B2
9978088 Pape May 2018 B2
10007112 Fitch et al. Jun 2018 B2
10019334 Caballero et al. Jul 2018 B2
10021043 Sevier Jul 2018 B2
10038716 Todeschini et al. Jul 2018 B2
10066982 Ackley et al. Sep 2018 B2
10327158 Wang et al. Jun 2019 B2
10360728 Venkatesha et al. Jul 2019 B2
10401436 Young et al. Sep 2019 B2
10410029 Powilleit Sep 2019 B2
10685643 Hendrickson et al. Jun 2020 B2
10714121 Hardek Jul 2020 B2
10732226 Kohtz et al. Aug 2020 B2
10909490 Raj et al. Feb 2021 B2
11158336 Hardek Oct 2021 B2
20020007273 Chen Jan 2002 A1
20020054101 Beatty May 2002 A1
20020128838 Veprek Sep 2002 A1
20020129139 Ramesh Sep 2002 A1
20020138274 Sharma et al. Sep 2002 A1
20020143540 Malayath et al. Oct 2002 A1
20020145516 Moskowitz et al. Oct 2002 A1
20020152071 Chaiken et al. Oct 2002 A1
20020178004 Chang et al. Nov 2002 A1
20020178074 Bloom Nov 2002 A1
20020184027 Brittan et al. Dec 2002 A1
20020184029 Brittan et al. Dec 2002 A1
20020198712 Hinde et al. Dec 2002 A1
20030023438 Schramm et al. Jan 2003 A1
20030061049 Erten Mar 2003 A1
20030120486 Brittan et al. Jun 2003 A1
20030141990 Coon Jul 2003 A1
20030191639 Mazza Oct 2003 A1
20030220791 Toyama Nov 2003 A1
20040181461 Raiyani et al. Sep 2004 A1
20040181467 Raiyani et al. Sep 2004 A1
20040193422 Fado et al. Sep 2004 A1
20040215457 Meyer Oct 2004 A1
20040230420 Kadambe et al. Nov 2004 A1
20040242160 Ichikawa et al. Dec 2004 A1
20050044129 McCormack et al. Feb 2005 A1
20050049873 Bartur et al. Mar 2005 A1
20050055205 Jersak et al. Mar 2005 A1
20050070337 Byford Mar 2005 A1
20050071158 Byford Mar 2005 A1
20050071161 Shen Mar 2005 A1
20050080627 Hennebert et al. Apr 2005 A1
20050177369 Stoimenov et al. Aug 2005 A1
20060235739 Levis et al. Oct 2006 A1
20070063048 Havens et al. Mar 2007 A1
20070080930 Logan et al. Apr 2007 A1
20070184881 Wahl et al. Aug 2007 A1
20080052068 Aguilar et al. Feb 2008 A1
20080185432 Caballero et al. Aug 2008 A1
20080280653 Ma Nov 2008 A1
20090006164 Kaiser et al. Jan 2009 A1
20090099849 Iwasawa Apr 2009 A1
20090134221 Zhu et al. May 2009 A1
20090164902 Cohen et al. Jun 2009 A1
20090192705 Golding et al. Jul 2009 A1
20100057465 Kirsch et al. Mar 2010 A1
20100177076 Essinger et al. Jul 2010 A1
20100177080 Essinger et al. Jul 2010 A1
20100177707 Essinger et al. Jul 2010 A1
20100177749 Essinger et al. Jul 2010 A1
20100226505 Kimura Sep 2010 A1
20100250243 Schalk et al. Sep 2010 A1
20100265880 Rautiola et al. Oct 2010 A1
20110029312 Braho et al. Feb 2011 A1
20110029313 Braho et al. Feb 2011 A1
20110093269 Braho et al. Apr 2011 A1
20110119623 Kim May 2011 A1
20110169999 Grunow et al. Jul 2011 A1
20110202554 Powilleit et al. Aug 2011 A1
20110208521 McClain Aug 2011 A1
20110237287 Klein et al. Sep 2011 A1
20110282668 Stefan et al. Nov 2011 A1
20120111946 Golant May 2012 A1
20120168511 Kotlarsky et al. Jul 2012 A1
20120168512 Kotlarsky et al. Jul 2012 A1
20120193423 Samek Aug 2012 A1
20120197678 Ristock et al. Aug 2012 A1
20120203647 Smith Aug 2012 A1
20120223141 Good et al. Sep 2012 A1
20120228382 Havens et al. Sep 2012 A1
20120248188 Kearney Oct 2012 A1
20120253548 Davidson Oct 2012 A1
20130043312 Van Horn Feb 2013 A1
20130075168 Amundsen et al. Mar 2013 A1
20130080173 Talwar et al. Mar 2013 A1
20130082104 Kearney et al. Apr 2013 A1
20130090089 Rivere Apr 2013 A1
20130175341 Kearney et al. Jul 2013 A1
20130175343 Good Jul 2013 A1
20130257744 Daghigh et al. Oct 2013 A1
20130257759 Daghigh Oct 2013 A1
20130270346 Xian et al. Oct 2013 A1
20130287258 Kearney Oct 2013 A1
20130292475 Kotlarsky et al. Nov 2013 A1
20130292477 Hennick et al. Nov 2013 A1
20130293539 Hunt et al. Nov 2013 A1
20130293540 Laffargue et al. Nov 2013 A1
20130306728 Thuries et al. Nov 2013 A1
20130306731 Pedrao Nov 2013 A1
20130307964 Bremer et al. Nov 2013 A1
20130308625 Park et al. Nov 2013 A1
20130313324 Koziol et al. Nov 2013 A1
20130313325 Wilz et al. Nov 2013 A1
20130325763 Cantor et al. Dec 2013 A1
20130342717 Havens et al. Dec 2013 A1
20140001267 Giordano et al. Jan 2014 A1
20140002828 Laffargue et al. Jan 2014 A1
20140008439 Wang Jan 2014 A1
20140025584 Liu et al. Jan 2014 A1
20140034734 Sauerwein, Jr. Feb 2014 A1
20140036848 Pease et al. Feb 2014 A1
20140039693 Havens et al. Feb 2014 A1
20140042814 Kather et al. Feb 2014 A1
20140049120 Kohtz et al. Feb 2014 A1
20140049635 Laffargue et al. Feb 2014 A1
20140058801 Deodhar et al. Feb 2014 A1
20140061306 Wu et al. Mar 2014 A1
20140063289 Hussey et al. Mar 2014 A1
20140066136 Sauerwein et al. Mar 2014 A1
20140067692 Ye et al. Mar 2014 A1
20140070005 Nahill et al. Mar 2014 A1
20140071840 Venancio Mar 2014 A1
20140074746 Wang Mar 2014 A1
20140076974 Havens et al. Mar 2014 A1
20140078341 Havens et al. Mar 2014 A1
20140078342 Li et al. Mar 2014 A1
20140078345 Showering Mar 2014 A1
20140097249 Gomez et al. Apr 2014 A1
20140098792 Wang et al. Apr 2014 A1
20140100774 Showering Apr 2014 A1
20140100813 Showering Apr 2014 A1
20140103115 Meier et al. Apr 2014 A1
20140104413 McCloskey et al. Apr 2014 A1
20140104414 McCloskey et al. Apr 2014 A1
20140104416 Giordano et al. Apr 2014 A1
20140104451 Todeschini et al. Apr 2014 A1
20140106594 Skvoretz Apr 2014 A1
20140106725 Sauerwein, Jr. Apr 2014 A1
20140108010 Maltseff et al. Apr 2014 A1
20140108402 Gomez et al. Apr 2014 A1
20140108682 Caballero Apr 2014 A1
20140110485 Toa et al. Apr 2014 A1
20140114530 Fitch et al. Apr 2014 A1
20140124577 Wang et al. May 2014 A1
20140124579 Ding May 2014 A1
20140125842 Winegar May 2014 A1
20140125853 Wang May 2014 A1
20140125999 Longacre et al. May 2014 A1
20140129378 Richardson May 2014 A1
20140131438 Kearney May 2014 A1
20140131441 Nahill et al. May 2014 A1
20140131443 Smith May 2014 A1
20140131444 Wang May 2014 A1
20140131445 Ding et al. May 2014 A1
20140131448 Xian et al. May 2014 A1
20140133379 Wang et al. May 2014 A1
20140136208 Maltseff et al. May 2014 A1
20140140585 Wang May 2014 A1
20140151453 Meier et al. Jun 2014 A1
20140152882 Samek et al. Jun 2014 A1
20140158770 Sevier et al. Jun 2014 A1
20140159869 Zumsteg et al. Jun 2014 A1
20140166755 Liu et al. Jun 2014 A1
20140166757 Smith Jun 2014 A1
20140166759 Liu et al. Jun 2014 A1
20140168787 Wang et al. Jun 2014 A1
20140175165 Havens et al. Jun 2014 A1
20140175172 Jovanovski et al. Jun 2014 A1
20140191644 Chaney Jul 2014 A1
20140191913 Ge et al. Jul 2014 A1
20140197238 Liu et al. Jul 2014 A1
20140197239 Havens et al. Jul 2014 A1
20140197304 Feng et al. Jul 2014 A1
20140203087 Smith et al. Jul 2014 A1
20140204268 Grunow et al. Jul 2014 A1
20140214631 Hansen Jul 2014 A1
20140217166 Berthiaume et al. Aug 2014 A1
20140217180 Liu Aug 2014 A1
20140231500 Ehrhart et al. Aug 2014 A1
20140232930 Anderson Aug 2014 A1
20140247315 Marty et al. Sep 2014 A1
20140263493 Amurgis et al. Sep 2014 A1
20140263645 Smith et al. Sep 2014 A1
20140267609 Laffargue Sep 2014 A1
20140270196 Braho et al. Sep 2014 A1
20140270229 Braho Sep 2014 A1
20140278387 Digregorio Sep 2014 A1
20140278391 Braho et al. Sep 2014 A1
20140282210 Bianconi Sep 2014 A1
20140284384 Lu et al. Sep 2014 A1
20140288933 Braho et al. Sep 2014 A1
20140297058 Barker et al. Oct 2014 A1
20140299665 Barber et al. Oct 2014 A1
20140312121 Lu et al. Oct 2014 A1
20140319220 Coyle Oct 2014 A1
20140319221 Oberpriller et al. Oct 2014 A1
20140326787 Barten Nov 2014 A1
20140330606 Paget et al. Nov 2014 A1
20140332590 Wang et al. Nov 2014 A1
20140344943 Todeschini et al. Nov 2014 A1
20140346233 Liu et al. Nov 2014 A1
20140351317 Smith et al. Nov 2014 A1
20140353373 Van et al. Dec 2014 A1
20140361073 Qu et al. Dec 2014 A1
20140361082 Xian et al. Dec 2014 A1
20140362184 Jovanovski et al. Dec 2014 A1
20140363015 Braho Dec 2014 A1
20140369511 Sheerin et al. Dec 2014 A1
20140374483 Lu Dec 2014 A1
20140374485 Xian et al. Dec 2014 A1
20150001301 Ouyang Jan 2015 A1
20150001304 Todeschini Jan 2015 A1
20150003673 Fletcher Jan 2015 A1
20150009338 Laffargue et al. Jan 2015 A1
20150009610 London et al. Jan 2015 A1
20150014416 Kotlarsky et al. Jan 2015 A1
20150021397 Rueblinger et al. Jan 2015 A1
20150028102 Ren et al. Jan 2015 A1
20150028103 Jiang Jan 2015 A1
20150028104 Ma et al. Jan 2015 A1
20150029002 Yeakley et al. Jan 2015 A1
20150032709 Maloy et al. Jan 2015 A1
20150039309 Braho et al. Feb 2015 A1
20150039878 Barten Feb 2015 A1
20150040378 Saber et al. Feb 2015 A1
20150048168 Fritz et al. Feb 2015 A1
20150049347 Laffargue et al. Feb 2015 A1
20150051992 Smith Feb 2015 A1
20150053766 Havens et al. Feb 2015 A1
20150053768 Wang et al. Feb 2015 A1
20150053769 Thuries et al. Feb 2015 A1
20150060544 Feng et al. Mar 2015 A1
20150062366 Liu et al. Mar 2015 A1
20150063215 Wang Mar 2015 A1
20150063676 Lloyd et al. Mar 2015 A1
20150069130 Gannon Mar 2015 A1
20150071819 Todeschini Mar 2015 A1
20150083800 Li et al. Mar 2015 A1
20150086114 Todeschini Mar 2015 A1
20150088522 Hendrickson et al. Mar 2015 A1
20150096872 Woodburn Apr 2015 A1
20150099557 Pettinelli et al. Apr 2015 A1
20150100196 Hollifield Apr 2015 A1
20150102109 Huck Apr 2015 A1
20150115035 Meier et al. Apr 2015 A1
20150127791 Kosecki et al. May 2015 A1
20150128116 Chen et al. May 2015 A1
20150129659 Feng et al. May 2015 A1
20150133047 Smith et al. May 2015 A1
20150134470 Hejl et al. May 2015 A1
20150136851 Harding et al. May 2015 A1
20150136854 Lu et al. May 2015 A1
20150142492 Kumar May 2015 A1
20150144692 Hejl May 2015 A1
20150144698 Teng et al. May 2015 A1
20150144701 Xian et al. May 2015 A1
20150149946 Benos et al. May 2015 A1
20150161429 Xian Jun 2015 A1
20150169925 Chen et al. Jun 2015 A1
20150169929 Williams et al. Jun 2015 A1
20150178523 Gelay et al. Jun 2015 A1
20150178534 Jovanovski et al. Jun 2015 A1
20150178535 Bremer et al. Jun 2015 A1
20150178536 Hennick et al. Jun 2015 A1
20150178537 El et al. Jun 2015 A1
20150181093 Zhu et al. Jun 2015 A1
20150181109 Gillet et al. Jun 2015 A1
20150186703 Chen et al. Jul 2015 A1
20150193268 Layton et al. Jul 2015 A1
20150193644 Kearney et al. Jul 2015 A1
20150193645 Colavito et al. Jul 2015 A1
20150199957 Funyak et al. Jul 2015 A1
20150204671 Showering Jul 2015 A1
20150210199 Payne Jul 2015 A1
20150220753 Zhu et al. Aug 2015 A1
20150236984 Sevier Aug 2015 A1
20150254485 Feng et al. Sep 2015 A1
20150261643 Caballero et al. Sep 2015 A1
20150302859 Aguilar et al. Oct 2015 A1
20150312780 Wang et al. Oct 2015 A1
20150324623 Powilleit Nov 2015 A1
20150327012 Bian et al. Nov 2015 A1
20160014251 Hejl Jan 2016 A1
20160040982 Li et al. Feb 2016 A1
20160042241 Todeschini Feb 2016 A1
20160057230 Todeschini et al. Feb 2016 A1
20160092805 Geisler et al. Mar 2016 A1
20160109219 Ackley et al. Apr 2016 A1
20160109220 Laffargue et al. Apr 2016 A1
20160109224 Thuries et al. Apr 2016 A1
20160112631 Ackley et al. Apr 2016 A1
20160112643 Laffargue et al. Apr 2016 A1
20160117627 Raj et al. Apr 2016 A1
20160124516 Schoon et al. May 2016 A1
20160125217 Todeschini May 2016 A1
20160125342 Miller et al. May 2016 A1
20160125873 Braho et al. May 2016 A1
20160133253 Braho et al. May 2016 A1
20160171720 Todeschini Jun 2016 A1
20160178479 Goldsmith Jun 2016 A1
20160180678 Ackley et al. Jun 2016 A1
20160189087 Morton et al. Jun 2016 A1
20160227912 Oberpriller et al. Aug 2016 A1
20160232891 Pecorari Aug 2016 A1
20160253023 Aoyama et al. Sep 2016 A1
20160292477 Bidwell Oct 2016 A1
20160294779 Yeakley et al. Oct 2016 A1
20160306769 Kohtz et al. Oct 2016 A1
20160314276 Wilz et al. Oct 2016 A1
20160314294 Kubler et al. Oct 2016 A1
20160377414 Thuries et al. Dec 2016 A1
20170011735 Kim et al. Jan 2017 A1
20170060320 Li et al. Mar 2017 A1
20170069288 Kanishima et al. Mar 2017 A1
20170076720 Gopalan et al. Mar 2017 A1
20170200108 Au et al. Jul 2017 A1
20180091654 Miller et al. Mar 2018 A1
20180204128 Avrahami et al. Jul 2018 A1
20190114572 Gold et al. Apr 2019 A1
20190124388 Schwartz Apr 2019 A1
20190250882 Swansey et al. Aug 2019 A1
20190354911 Alaniz et al. Nov 2019 A1
20190370721 Issac Dec 2019 A1
20200265828 Hendrickson et al. Aug 2020 A1
20200311650 Xu et al. Oct 2020 A1
20210117901 Raj et al. Apr 2021 A1
20220013137 Hardek Jan 2022 A1
Foreign Referenced Citations (39)
Number Date Country
3005795 Feb 1996 AU
9404098 Apr 1999 AU
3372199 Oct 1999 AU
0867857 Sep 1998 EP
0905677 Mar 1999 EP
1011094 Jun 2000 EP
1377000 Jan 2004 EP
3009968 Apr 2016 EP
63-179398 Jul 1988 JP
64-004798 Jan 1989 JP
04-296799 Oct 1992 JP
06-059828 Mar 1994 JP
06-095828 Apr 1994 JP
06-130985 May 1994 JP
06-161489 Jun 1994 JP
07-013591 Jan 1995 JP
07-199985 Aug 1995 JP
11-175096 Jul 1999 JP
2000-181482 Jun 2000 JP
2001-042886 Feb 2001 JP
2001-343992 Dec 2001 JP
2001-343994 Dec 2001 JP
2002-328696 Nov 2002 JP
2003-177779 Jun 2003 JP
2004-126413 Apr 2004 JP
2004-334228 Nov 2004 JP
2005-173157 Jun 2005 JP
2005-331882 Dec 2005 JP
2006-058390 Mar 2006 JP
9602050 Jan 1996 WO
9916050 Apr 1999 WO
9950828 Oct 1999 WO
0211121 Feb 2002 WO
2005119193 Dec 2005 WO
2006031752 Mar 2006 WO
2013163789 Nov 2013 WO
2013173985 Nov 2013 WO
2014019130 Feb 2014 WO
2014110495 Jul 2014 WO
Non-Patent Literature Citations (139)
Entry
US 8,548,242 B1, 10/2013, Longacre (withdrawn)
US 8,616,454 B2, 12/2013, Havens et al. (withdrawn)
Lukowicz, Paul, et al. “WearIT@work: Toward real-world industrial wearable computing.” IEEE Pervasive Computing 6.4 (2007): 8-13. (Year: 2007).
J. Odell and K. Mukerjee, “Architecture, User Interface, and Enabling Technology in Windows Vista's Speech Systems,” in IEEE Transactions on Computers, vol. 56, No. 9, pp. 1156-1168, Sep. 2007, doi: 10.1109/TC.2007.1065. (Year: 2007).
W. Kurschl, S. Mitsch, R. Prokop and J. Schonbock, “Gulliver—A Framework for Building Smart Speech-Based Applications,” 2007 40th Annual Hawaii International Conference on System Sciences (HICSS'07), Waikoloa, HI, USA, 2007, pp. 30-30, doi: 10.1109/HICSS.2007.243. (Year: 2007).
S. Furui, “Speech recognition technology in the ubiquitous/wearable computing environment,” 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No. 00CH37100), Istanbul, Turkey, 2000, pp. 3735-3738 vol.6, doi: 10.1109/ICASSP.2000.860214. (Year: 2000).
V. Stanford, “Wearable computing goes live in industry,” in IEEE Pervasive Computing, vol. 1, No. 4, pp. 14-19, Oct.-Dec. 2002, doi: 10.1109/MPRV.2002.1158274. (Year: 2002).
Roger G. Byford, “Voice System Technologies and Architecture”, A White Paper by Roger G. Byford CTO, Vocollect published May 10, 2003. Retrieved from Internet archive: Wayback Machine. (n.d.). https://web.archive.org/web/20030510234253/http://www.vocollect.com/products/VoiceTechWP.pdf (Year: 2003).
Voxware, Inc., “Voxware VMS, Because nothing short of the best will do,” Copyright 2019, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2019/01/Voxware-VMS-w.pdf> on May 26, 2023, 2 pages.
Voxware, Inc., “Voxware VoiceLogistics, Voice Solutions for Logistics Excellence,” Product Literature, Copyright 2005, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191653/http://www.voxware.com/media/pdf/Product_Literature_VoiceLogistics_03.pdf> on May 26, 2023, 5 pages.
Voxware, Inc., “Voxware VoxConnect, Make Integrating Voice and WMS Fast and Fluid,” Brochure, Copyright 2019, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2019/01/Voxware-VoxConnect-w.pdf> on May 25, 2023, 2 pages.
Voxware, Inc., “Voxware VoxPilot, Get 10-15% more productivity and drive critical decisions with insights from VoxPilot,” Copyright 2019, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2019/01/Voxware-VoxPilot-w.pdf> on May 26, 2023, 2 pages.
Voxware, Inc., v. Honeywell International Inc., Hand Held Products, Inc., Intermec Inc., and Vocollect, Inc., Jury Trial Demanded: First Amended Complaint for Declaratory Judgment of No Patent Infringement, Patent Invalidity, and Patent Unenforceability, Violation of Antitrust Laws, Deceptive Trade Practices, Unfair Competition, and Tortious Interference with Prospective Business Relations, Apr. 26, 2023, 66 pages, In the U.S. District Court for the District of Delaware, C.A. No. 23-052 (RGA).
Voxware, Inc., v. Honeywell International Inc., Hand Held Products, Inc., Intermec Inc., and Vocollect, Inc., Demand for Jury Trial: Defendants' Answer, Defenses, and Counterclaims, Mar. 29, 2023, 43 pages, In the U.S. District Court for the District of Delaware, C.A. No. 23-052 (RGA).
Voxware, Inc., v. Honeywell International Inc., Hand Held Products, Inc., Intermec Inc., and Vocollect, Inc., Jury Trial Demanded: Complaint for Declaratory Judgment of No Patent Infringement, Patent Invalidity, and Patent Unenforceability, Violation of Antitrust Laws, Deceptive Trade Practices, Unfair Competition, and Tortious Interference with Prospective Business Relations, Jan. 17, 2023, 44 pages, In the U.S. District Court for the District of Delaware, C.A. No. 23-052 (RGA).
voxware.com, “Voice Directed Picking Software for Warehouses”, retrieved from the Internet at: <https://www.voxware.com/voxware-vms/> on May 25, 2023, 11 pages.
Worldwide Testing Services (Taiwan) Co., Ltd., Registration No. W6D21808-18305-FCC, FCC ID: SC6BTH430, External Photos, Appendix pp. 2-5, retrieved from the Internet at: <https://fccid.io/SC6BTH430/External-Photos/External-Photos-4007084.pdf> on May 25, 2023, 4 pages.
Y. Muthusamy, R. Agarwal, Yifan Gong and V. Viswanathan, “Speech-enabled information retrieval in the automobile environment,” 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No. 99CH36258), 1999, pp. 2259-2262 vol. 4. (Year: 1999).
A. Gupta, N. Patel and S. Khan, “Automatic speech recognition technique for voice command,” 2014 International Conference on Science Engineering and Management Research (ICSEMR), 2014, pp. 1-5, doi: 10.1109/ICSEMR.2014.7043641. (Year: 2014).
A. L. Kun, W. T. Miller and W. H. Lenharth, “Evaluating the user interfaces of an integrated system of in-car electronic devices,” Proceedings. 2005 IEEE Intelligent Transportation Systems, 2005., 2005, pp. 953-958. (Year: 2005).
A. L. Kun, W. T. Miller, A. Pelhe and R. L. Lynch, “A software architecture supporting in-car speech interaction,” IEEE Intelligent Vehicles Symposium, 2004, 2004, pp. 471-476. (Year: 2004).
Abel Womack, “Voxware announces sales partnership with Buton eBusiness Solutions”, retrieved from the Internet at <https://www.abelwomack.com/voxware-announces-sales-partnership-with-buton-ebusiness-solutions/> on May 26, 2023, 2 pages.
Advisory Action (PTOL-303) dated Oct. 18, 2022 for U.S. Appl. No. 17/111,164, 3 page(s).
Annex to the communication dated Jan. 3, 2019 for EP Application No. 15189657, 1 page(s).
Annex to the communication dated Jul. 6, 2018 for EP Application No. 15189657, 6 page(s).
Annex to the communication dated Nov. 19, 2018 for EP Application No. 15189657, 2 page(s).
Applicant Initiated Interview Summary (PTOL-413) dated Jun. 15, 2020 for U.S. Appl. No. 15/220,584.
Chengyi Zheng and Yonghong Yan, “Improving Speaker Adaptation by Adjusting the Adaptation Data Set”; 2000 IEEE International Symposium on Intelligent Signal Processing and Communication Systems, Nov. 5-8, 2000.
Christensen, “Speaker Adaptation of Hidden Markov Models using Maximum Likelihood Linear Regression”, Thesis, Aalborg University, Apr. 1996.
D. Barchiesi, D. Giannoulis, D. Stowell and M. D. Plumbley, “Acoustic Scene Classification: Classifying environments from the sounds they produce,” in IEEE Signal Processing Magazine, vol. 32, No. 3, pp. 16-34, May 2015, doi: 10.1109/MSP.2014.2326181. (Year: 2015).
DC Velocity Staff, “Voxware shows Intellestra supply chain analytics tool”, dated Apr. 6, 2016, retrieved from the Internet at <https://www.dcvelocity.com/articles/31486-voxware-shows-intellestra-supply-chain-analytics-tool> on May 26, 2023, 7 pages.
Decision to Refuse European Application No. 15189657.8, dated Jan. 3, 2019, 10 pages.
Decision to Refuse European Application No. 15189657.9, dated Jul. 6, 2018, 2 pages.
E. Erzin, Y. Yemez, A. M. Tekalp, A. Ercil, H. Erdogan and H. Abut, “Multimodal person recognition for human-vehicle interaction,” in IEEE MultiMedia, vol. 13, No. 2, pp. 18-31, Apr.-Jun. 2006. (Year: 2006).
Examiner initiated interview summary (PTOL-413B) dated Apr. 11, 2017 for U.S. Appl. No. 14/561,648, 1 page(s).
Examiner initiated interview summary (PTOL-413B) dated Sep. 14, 2018 for U.S. Appl. No. 15/220,584, 1 page(s).
Examiner Interview Summary Record (PTOL-413) dated Mar. 26, 2021 for U.S. Appl. No. 16/695,555.
Examiner Interview Summary Record (PTOL-413) dated Oct. 18, 2022 for U.S. Appl. No. 17/111,164, 1 page(s).
Final Rejection dated Apr. 13, 2023 for U.S. Appl. No. 16/869,228, 45 page(s).
Final Rejection dated Aug. 7, 2019 for U.S. Appl. No. 15/635,326, 37 page(s).
Final Rejection dated Jul. 25, 2022 for U.S. Appl. No. 17/111,164, 22 page(s).
Final Rejection dated Jun. 5, 2019 for U.S. Appl. No. 15/220,584, 14 page(s).
Final Rejection dated May 7, 2020 for U.S. Appl. No. 14/880,482.
Final Rejection dated May 30, 2019 for U.S. Appl. No. 14/880,482.
Jie Yi, Kei Miki, Takashi Yazu, Study of Speaker Independent Continuous Speech Recognition, Oki Electric Research and Development, Oki Electric Industry Co., Ltd., Apr. 1, 1995, vol. 62, No. 2, pp. 7-12.
Kellner, A., et al., Strategies for Name Recognition in Automatic Directory Assistance Systems, Interactive Voice Technology for Telecommunication Application, IVTTA '98 Proceedings, 1998 IEEE 4th Workshop, Sep. 29, 1998. Submitted previously in related application prosecution.
Marc Glassman, Inc. Deploys Vocollect Voice on Psion Teklogix Workabout Pro; HighJump WMS Supports Open Voice Platform, PR Newswire [New York], Jan. 8, 2007 (Year: 2007).
Material Handling Wholesaler, “Buton and Voxware announce value-added reseller agreement,” retrieved from the Internet at <https://www.mhwmag.com/shifting-gears/buton-and-voxware-announce-value-added-reseller-agreement/> on May 26, 2023, 4 pages.
Minutes of the Oral Proceeding before the Examining Division received for EP Application No. 15189657.8, dated Jan. 3, 2019, 16 pages.
Mokbel, “Online Adaptation of HMMs to Real-Life Conditions: A Unified Framework”, IEEE Trans. on Speech and Audio Processing, May 2001.
Non-Final Rejection dated Feb. 4, 2022 for U.S. Appl. No. 17/111,164, 21 page(s).
Non-Final Rejection dated Jan. 18, 2023 for U.S. Appl. No. 17/111,164.
Non-Final Rejection dated Mar. 1, 2019 for U.S. Appl. No. 15/220,584, 12 page(s).
Non-Final Rejection dated Mar. 21, 2019 for U.S. Appl. No. 15/635,326, 31 page(s).
Non-Final Rejection dated Mar. 26, 2021 for U.S. Appl. No. 16/695,555.
Non-Final Rejection dated Nov. 1, 2018 for U.S. Appl. No. 14/880,482.
Non-Final Rejection dated Nov. 1, 2019 for U.S. Appl. No. 15/635,326, 8 page(s).
Non-Final Rejection dated Nov. 14, 2019 for U.S. Appl. No. 14/880,482.
Non-Final Rejection dated Oct. 4, 2021 for U.S. Appl. No. 17/111,164, 19 page(s).
Non-Final Rejection dated Oct. 14, 2022 for U.S. Appl. No. 16/869,228, 42 page(s).
Non-Final Rejection dated Oct. 31, 2022 for U.S. Appl. No. 17/449,213, 5 page(s).
Non-Final Rejection dated Sep. 8, 2016 for U.S. Appl. No. 14/561,648, 20 page(s).
Notice of Allowance and Fees Due (PTOL-85) dated Apr. 11, 2017 for U.S. Appl. No. 14/561,648.
Notice of Allowance and Fees Due (PTOL-85) dated Aug. 15, 2014 for U.S. Appl. No. 13/474,921.
Notice of Allowance and Fees Due (PTOL-85) dated Feb. 10, 2020 for U.S. Appl. No. 15/635,326.
Notice of Allowance and Fees Due (PTOL-85) dated Feb. 28, 2023 for U.S. Appl. No. 17/449,213.
Notice of Allowance and Fees Due (PTOL-85) dated Jun. 15, 2020 for U.S. Appl. No. 15/220,584.
Notice of Allowance and Fees Due (PTOL-85) dated Jun. 20, 2023 for U.S. Appl. No. 17/449,213, 10 page(s).
U.S. Patent Application for Multipurpose Optical Reader, filed May 14, 2014 (Jovanovski et al.); 59 pages, U.S. Appl. No. 14/277,337, abandoned.
Notice of Allowance and Fees Due (PTOL-85) dated Jun. 28, 2021 for U.S. Appl. No. 16/695,555, 9 page(s).
Notice of Allowance and Fees Due (PTOL-85) dated Jun. 29, 2023 for U.S. Appl. No. 16/869,228, 10 page(s).
Notice of Allowance and Fees Due (PTOL-85) dated Mar. 1, 2017 for U.S. Appl. No. 14/561,648, 9 page(s).
Notice of Allowance and Fees Due (PTOL-85) dated Mar. 11, 2020 for U.S. Appl. No. 15/220,584, 9 page(s).
Notice of Allowance and Fees Due (PTOL-85) dated May 20, 2020 for U.S. Appl. No. 15/635,326.
Notice of Allowance and Fees Due (PTOL-85) dated Sep. 4, 2019 for U.S. Appl. No. 15/220,584, 9 page(s).
Notice of Allowance and Fees Due (PTOL-85) dated Sep. 23, 2020 for U.S. Appl. No. 14/880,482.
Office Action in related European Application No. 15189657.8 dated May 12, 2017, pp. 1-6.
Osamu Segawa, Kazuya Takeda, An Information Retrieval System for Telephone Dialogue in Load Dispatch Center, IEEJ Trans, EIS, Sep. 1, 2005, vol. 125, No. 9, pp. 1438-1443. (Abstract Only).
Result of Consultation (Interview Summary) received for EP Application No. 15189657.8, dated Nov. 19, 2018, 4 pages.
Roberts, Mike, et al., “Intellestra: Measuring What Matters Most,” Voxware Webinar, dated Jun. 22, 2016, retrieved from the Internet at <https://vimeo.com/195626331> on May 26, 2023, 4 pages.
Search Report and Written Opinion in counterpart European Application No. 15189657.8 dated Feb. 5, 2016, pp. 1-7.
Silke Goronzy, Krzysztof Marasek, Ralf Kompe, Semi-Supervised Speaker Adaptation, in Proceedings of the Sony Research Forum 2000, vol. 1, Tokyo, Japan, 2000.
Smith, Ronnie W., An Evaluation of Strategies for Selective Utterance Verification for Spoken Natural Language Dialog, Proc. Fifth Conference on Applied Natural Language Processing (ANLP), 1997, 41-48.
Summons to attend Oral Proceedings for European Application No. 15189657.9, dated Jan. 3, 2019, 2 pages.
Summons to attend Oral Proceedings pursuant to Rule 115(1) EPC received for EP Application No. 15189657.8, dated Jul. 6, 2018, 11 pages.
T. B. Martin, “Practical applications of voice input to machines,” in Proceedings of the IEEE, vol. 64, No. 4, pp. 487-501, Apr. 1976, doi: 10.1109/PROC.1976.10157. (Year: 1976).
T. Kuhn, A. Jameel, M. Stumpfle and A. Haddadi, “Hybrid in-car speech recognition for mobile multimedia applications,” 1999 IEEE 49th Vehicular Technology Conference (Cat. No. 99CH36363), 1999, pp. 2009-2013 vol.3. (Year: 1999).
U.S. Patent Application for a Laser Scanning Module Employing an Elastomeric U-Hinge Based Laser Scanning Assembly, filed Feb. 7, 2012 (Feng et al.), U.S. Appl. No. 13/367,978, abandoned.
U.S. Patent Application for Indicia Reader filed Apr. 1, 2015 (Huck), U.S. Appl. No. 14/676,109, abandoned.
U.S. Patent Application for Multifunction Point of Sale Apparatus With Optical Signature Capture filed Jul. 30, 2014 (Good et al.); 37 pages; now abandoned, U.S. Appl. No. 14/446,391.
U.S. Patent Application for Terminal Having Illumination and Focus Control filed May 21, 2014 (Liu et al.); 31 pages; now abandoned, U.S. Appl. No. 14/283,282.
US Patent Application for “Distinguishing User Speech from Background Speech in Speech-Dense Environments”, Unpublished (filed Jun. 2, 2023), (David D. Hardek, Inventor), (Vocollect, Inc., Assignee), U.S. Appl. No. 18/328,034.
US Patent Application for “Systems and Methods for Worker Resource Management”, Unpublished (filed Jun. 1, 2023), (Mohit Raj, Inventor), (Vocollect, Inc., Assignee), U.S. Appl. No. 18/327,673.
Voxware Inc., “Voxware Headsets, Portfolio, Features & Specifications,” Brochure, Sep. 2011, retrieved from the Internet at <http://webshop.advania.se/pdf/9FEB1CF7-2B40-4A63-8644-471F2D282B65.pdf> on May 25, 2023, 4 pages.
Voxware, “People . . . Power . . . Performance,” Product Literature, captured Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191729/http://www.voxware.com/media/pdf/Product_Literature_Company_02.pdf> on May 26, 2023, 3 pages.
Voxware, “The Cascading Benefits of Multimodal Automation in Distribution Centers,” retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2020/12/Voxware-Cascading-Benefits.pdf> on May 26, 2023, 14 pages.
Voxware, “Voice in the Warehouse: The Hidden Decision, Making the Open and Shut Case”, White Paper, Copyright 2008, retrieved from the Internet at: <https://www.voxware.com/wp-content/uploads/2016/11/Voice_in_the_Warehouse-The_Hidden_Decision.pdf> on May 25, 2023, 3 pages.
Voxware, “Voice-Directed Results, VoiceLogistics Helps Dunkin' Donuts Deliver,” Case Study, captured on Oct. 15, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20061015223800/http://www.voxware.com/fileadmin/Download_Center/Case_Studies/VoiceLogistics_Helps_Dunkin_Donuts_Deliver.pdf> on May 26, 2023, 3 pages.
Voxware, “VoiceLogistics Results, Reed Boardall Doesn't Leave Customers Out in the Cold!,” Case Study, captured on Oct. 15, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20061015223031/http://www.voxware.com/fileadmin/Download_Center/Case_Studies/Reed_Boardall_Doesn_t_Leave_Customers_in_the_Cold.pdf> on May 26, 2023, 3 pages.
Voxware, “VoxConnect, Greatly simplify the integration of your voice solution,” retrieved from the Internet at <https://www.voxware.com/voxware-vms/voxconnect/> on May 26, 2023, 4 pages.
Voxware, “VoxPilot, Supply Chain Analytics,” retrieved from the Internet at <https://www.voxware.com/supply-chain-analytics/> on May 26, 2023, 8 pages.
Voxware, “Voxware Intellestra provides real-time view of data across supply chain,” Press Release, dated Apr. 14, 2015, retrieved from the Internet at <https://www.fleetowner.com/refrigerated-transporter/cold-storage-logistics/article/21229403/voxware-intellestra-provides-realtime-view-of-data-across-entire-supply-chain> on May 26, 2023, 2 pages.
Voxware, “Voxware Intellestra, What if supply chain managers could see the future?”, Brochure, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2017/04/Voxware-Intellestra-w.pdf> on May 26, 2023, 2 pages.
Voxware, “Why Cloud VMS, All of voice's benefits with a faster ROI: Cloud VMS,” retrieved from the Internet at <https://www.voxware.com/voxware-vms/why-cloud-vms/> on May 26, 2023, 4 pages.
Voxware, Inc., “4-Bay Smart Charger,” Product Literature, Copyright 2005, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191719/http://www.voxware.com/media/pdf/Smart_Charger_01_pdf> on May 26, 2023, 3 pages.
Voxware, Inc., “Bluetooth Modular Headset, Single-Ear (Mono) BT HD, BTH430 Quick Start Guide v.1” retrieved from the Internet at <https://usermanual.wiki/Voxware/BTH430/pdf> on May 25, 2023, 12 pages.
Voxware, Inc., “Certified Client Devices for Voxware VMS Voice Solutions,” Product Sheets, Effective Feb. 2012, retrieved from the Internet at <https://docplayer.net/43814384-Certified-client-devices-for-voxware-vms-voice-solutions-effective-Feb. 2012.html> on May 26, 2023, 30 pages.
Voxware, Inc., “Dispelling Myths About Voice in the Warehouse: Maximizing Choice and Control Across the 4 Key Components of Every Voice Solution”, White Paper, Copyright 2012, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/Dispelling_Myths.pdf> on May 25, 2023, 6 pages.
Voxware, Inc., “Innovative Voice Solutions Powered by Voxware, Broadening the Role of Voice in Supply Chain Operations,” Product Literature, Copyright 2005, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191628/http://www.voxware.com/media/pdf/VoxBrowserVoxManager_02.pdf> on May 26, 2023, 5 pages.
Voxware, Inc., “Intellestra BI & Analytics,” Product Sheet, Copyright 2015, retrieved form the Internet at <https://www.voxware.com/wp-content/uploads/2016/12/Voxware_Intellestra_Product_Overview.pdf> on May 26, 2023, 1 page.
Voxware, Inc., “Is Your Voice Solution Engineered for Change?”, White Paper, Copyright 2012, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/WhitePaper_Engineered_For_Change.pdf> on May 25, 2023, 9 pages.
Voxware, Inc., “MX3X—VoiceLogistics on a Versatile Platform”, Product Literature, Copyright 2004, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191822/http://www.voxware.com/media/pdf/LXE_MX3X_01.pdf> on May 26, 2023, 2 pages.
Voxware, Inc., “Optimizing Work Performance, Voice-Directed Operations in the Warehouse,” White Paper, Copyright 2012, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/WhitePaper_OptimizingWorkerPerformance.pdf> on May 25, 2023, 6 pages.
Voxware, Inc., “VLS-410 >>Wireless Voice Recognition<<,” Product Literature, Copyright 2004, Captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191604/http://www.voxware.com/media/pdf/VLS-410_05.pdf> on May 26, 2023, 3 pages.
Voxware, Inc., “Voice in the Cloud: Opportunity for Warehouse Optimization,” White Paper, Copyright 2012, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/Vox_whitepaper_VoiceCloud.pdf> on May 26, 2023, 7 pages.
Voxware, Inc., “Voice in the Warehouse: Does the Recognizer Matter? Why You Need 99.9% Recognition Accuracy,” White Paper, Copyright 2010, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/WhitePaper_Recognizer.pdf> on May 25, 2023, 7 pages.
Voxware, Inc., “VoiceLogistics, Technology Architecture,” Product Literature, Copyright 2003, captured Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191745/http://www.voxware.com/media/pdf/Product_Literature_VLS_Architechture_02.pdf> on May 26, 2023, 5 pages.
Voxware, Inc., “VoxPilot, Active Decision Support for Warehouse Voice,” Brochure, Copyright 2012, retrieved from the Internet at <https://voxware.com/wp-content/uploads/2016/11/Solutions_VoxApp_VoxPilot_2.pdf> on May 26, 2023, 2 pages.
Voxware, Inc., “Voxware Integrated Speech Engine Adapts to Your Workforce and Your Warehouse,” Brochure, Copyright 2021, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/Vox_product_VISE_Recognition_Engine.pdf> on May 25, 2023, 2 pages.
Examiner Interview Summary Record (PTOL-413) dated Aug. 9, 2023 for U.S. Appl. No. 18/328,034, 1 page(s).
Notice of Allowance and Fees Due (PTOL-85) dated Aug. 9, 2023 for U.S. Appl. No. 18/328,034, 10 page(s).
Non-Final Rejection dated Aug. 17, 2023 for U.S. Appl. No. 18/327,673, 25 page(s).
Exhibit 16—U.S. Pat. No. 6,662,163 (“Albayrak”), Initial Invalidity Chart for U.S. Pat. No. 8,914,290 (the “'290 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 53 pages.
Exhibit 17—2012 Vocollect Voice Solutions Brochure in view of 2012 VoiceArtisan Brochure, in further view of Aug. 2013 VoiceConsole 5.0 Implementation Guide, and in further view of 2011 VoiceConsole Brochure, Initial Invalidity Chart for U.S. Pat. No. 10,909,490 (the “'490 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 72 pages.
Exhibit 18—Vocollect's Pre-Oct. 15, 2013 Vocollect Voice Solution, Initial Invalidity Chart for U.S. Pat. No. 10,909,490 (the “'490 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 76 pages.
Exhibit 21—Vocollect's Pre-Feb. 4, 2004 Talkman Management System, Initial Invalidity Chart for U.S. Pat. No. 11,158,336 (the “'336 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 85 pages.
Exhibit 22—the Talkman T2 Manual, Initial Invalidity Chart for U.S. Pat. No. 11,158,336 (the “'336 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 86 pages.
Exhibit VOX001914—Voxware VLS-410 Wireless Voice Recognition, brochure, copyright 2004, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 2 pages.
Exhibit VOX001917—Voxbeans User Manual, Version 1, Sep. 3, 2004, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 146 pages.
Exhibit VOX002498—Appendix L: Manual, Talkman System, FCC: Part 15.247, FCC ID: MQOTT600-40300, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 187 pages.
Exhibit VOX002692—SEC Form 10-K for Voxware, Inc., Fiscal Year Ended Jun. 30, 2001, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 66 pages.
Exhibit VOX002833—Vocollect by Honeywell, Vocollect VoiceConsole, brochure, copyright 2011, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 2 pages.
Exhibit VOX002835—Vocollect (Intermec), Vocollect VoiceArtisan, brochure, copyright 2012, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 6 pages.
Exhibit VOX002908—Appendix K: Manual, Vocollect Hardware Documentation, Model No. HBT1000-01, Aug. 2012, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 77 pages.
Exhibit VOX002985—Vocollect Voice Solutions, Transforming Workflow Performance with Best Practice Optimization, brochure, copyright 2012, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 8 pages.
Exhibit VOX002993—Vocollect VoiceConsole 5.0 Implementation Guide, Aug. 2013, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 118 pages.
Final Rejection dated Aug. 30, 2023 for U.S. Appl. No. 17/111,164, 28 page(s).
Non-Final Office Action (Letter Restarting Period for Response) dated Aug. 25, 2023 for U.S. Appl. No. 18/327,673, 26 page(s).
Voxware, Voxware Integrated Speech Engine (VISE), Adapts to Your Workforce and Your Warehouse, brochure, copyright 2012, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et al., C.A. No. 23-052-RGA (D. Del), 2 pages.
Related Publications (1)
Number Date Country
20230317053 A1 Oct 2023 US
Provisional Applications (1)
Number Date Country
61488587 May 2011 US
Continuations (4)
Number Date Country
Parent 16869228 May 2020 US
Child 18328189 US
Parent 15635326 Jun 2017 US
Child 16869228 US
Parent 14561648 Dec 2014 US
Child 15635326 US
Parent 13474921 May 2012 US
Child 14561648 US