The present invention relates to voice-directed workflow and, more specifically, to a speech recognition system with voice templates that are made distinct by a dynamic training analysis.
Voice-directed workflow systems allow workers to communicate verbally with a computer system. These systems may be used in warehouses or distribution centers to improve safety and efficiency for tasks such as picking, receiving, replenishing, and/or shipping.
Voice-directed workflow systems typically require a worker to wear a headset equipped with a microphone and earphone. Voice commands are transmitted to the worker via the earphone and spoken responses from the worker are received by the microphone. In this way, a worker may be directed to perform a task and respond with their progress by speaking established responses into the microphone at certain points in an established workflow dialog.
Speech recognition is part of a voice-directed workflow system. Speech recognition is the translation of spoken words into text/data via a computing device. A computing device configured for speech recognition is known as a speech recognizer.
Speech recognition is a challenging problem for a variety of reasons. First, the speech recognizer must detect speech versus background noise. For example, the speech recognizer must recognize that a sound represents speech rather than a breath. Next, the speech recognizer must compare the speech input to words and/or phrases in a vocabulary typically specific to the application (i.e., application vocabulary). Here, the speech recognizer may use the workflow dialog to help determine what was said.
Often, for a particular workflow dialog, the expected responses are limited to a range of possible responses, or even a single expected response. For example, if a worker is given a picking task with the prompt, “pick two,” and the worker is expected to confirm the picking task with the response “two,” then the speech that occurs after the prompt may be expected to match a voice template for “two.” In general, a workflow has an associated application vocabulary consisting of voice templates for the vocabulary words, sounds, or phrases necessary to carry out the tasks associated with the workflow.
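By way of illustration, a recognizer might narrow its template matching to only the responses expected at the current point in the dialog. The Python sketch below is purely illustrative and not part of this disclosure; the prompt-to-response mapping and all names are hypothetical:

```python
# Minimal sketch: restrict matching to the responses expected at the
# current point in the workflow dialog. All names are hypothetical.
EXPECTED_RESPONSES = {
    "pick two": ["two"],                  # prompt -> expected confirmations
    "confirm location": ["yes", "no"],
}

def candidate_templates(prompt, vocabulary):
    """Return only the voice templates for responses expected after `prompt`."""
    expected = EXPECTED_RESPONSES.get(prompt, list(vocabulary))
    return {word: vocabulary[word] for word in expected if word in vocabulary}
```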
Voice templates (i.e., speech templates or templates) are voice patterns for particular words or phrases stored in memory. The voice templates may be specific to a user in speaker-dependent recognition systems. Alternatively, the voice templates may be for all users (i.e., generic) in speaker-independent recognition systems. In either case, the speech recognizer determines how closely the received speech matches a stored voice template to determine what was most likely spoken.
Since everyone's speech may be different, custom voice templates may be created. To create a custom voice template for a word, a user may be prompted (e.g., through a display) to provide speech samples (e.g., by repeatedly saying a word). It is common to require workers new to a voice-directed workflow system to train the system for their voice by creating voice templates for a variety of words and/or sounds.
A problem arises when the voice templates created by a worker are not distinct enough for a speech recognizer to distinguish them from other words in the application vocabulary. For example, some workers may pronounce the word, “five,” and the word, “nine,” similarly. This may result in a voice template created for the word, “five,” that is very similar to the voice template for the word, “nine.”
Voice template similarity may erode the speech recognizer's performance. For example, a worker may be asked to repeat what they have said, which may reduce productivity and cause frustration. Errors may also occur when one word is recorded in place of another (e.g., a 5 recorded when a 9 was intended, or vice versa).
Therefore, a need exists for analysis during the creation of a voice template (i.e., during training) to ensure that a created voice template is not similar to (or does not match with) any other stored voice templates. If a similarity is found, then a user may be prompted to create a new, more distinct, voice template for the word. This dynamic training analysis may improve user experience and accuracy for voice-directed workflow systems.
Accordingly, in one aspect, the present invention embraces a method for creating a voice template for a speech recognition system. The method begins with acquiring multiple samples of a spoken word from a user using the speech recognition system. Here, the spoken word represents a vocabulary word from an application vocabulary stored in a computer-readable memory (i.e., memory). Next, a voice template for the spoken word is created from the multiple samples. This voice template is compared to other voice templates for other words from the application vocabulary, and if the voice template for the spoken word is similar to at least one of the other voice templates for the other words, then the user is prompted to create a new voice template for the spoken word. The user is then provided with instructions for adjusting the spoken word to make the new voice template for the spoken word less similar to the other voice templates for the other words.
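A minimal sketch of this aspect, assuming templates are equal-length arrays of feature frames and that a similarity function and threshold are supplied by the caller, might look as follows (all names are hypothetical, not taken from the disclosure):

```python
import numpy as np

def create_and_check_template(samples, word, vocabulary, score, threshold):
    """Build a template from repeated samples, then check it for distinctness.

    `samples`: list of equal-length feature arrays for the spoken word.
    `vocabulary`: dict mapping the other vocabulary words to stored templates.
    `score`/`threshold`: a similarity measure and its match cutoff.
    Returns (template, None) on success, or (None, conflicting_word) when the
    user should be prompted to create a new, more distinct voice template.
    """
    template = np.mean(np.stack([np.asarray(s, float) for s in samples]), axis=0)
    for other_word, other_template in vocabulary.items():
        if other_word == word:
            continue
        if score(template, other_template) >= threshold:
            return None, other_word   # too similar: prompt the user to re-train
    return template, None
```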
In some exemplary embodiments, the other voice templates for other words are custom voice templates created for a specific user, while in other embodiments the other voice templates for other words are generic voice templates created for any user.
In still other exemplary embodiments, the instructions for adjusting the spoken word may include prompts to help a user enunciate the spoken word more distinctly, while in others, the user may be prompted (e.g., by information displayed on a screen) to utter an alternative word to represent the spoken word. In some cases, the alternative word may be a particular alternative word presented to the user, while in others the user may be presented with a set of possible words from which to choose the alternative word.
In another aspect, the present invention embraces a method for training a speaker-independent speech recognition system. The method begins by acquiring a speech sample of a word from an application vocabulary using the speaker-independent speech recognition system. This speech sample is compared to generic voice templates in the application vocabulary, and if the speech sample matches more than one of the generic voice templates, then the user is prompted to create a custom voice template for a substitute word. The speaker-independent speech recognition system is then trained on the substitute word. The resulting custom voice template for the substitute word is then stored in the application vocabulary, replacing the generic voice template for the word. If, on the other hand, the comparison of the speech sample to the generic voice templates in the application vocabulary does not find a match to more than one generic voice template, then no training is required and the speaker-independent speech recognition system uses the generic voice template for the word.
In an exemplary embodiment of the method for training a speaker-independent speech recognition system, the prompts for a user to create a custom voice template for a substitute word include a list of possible substitute words.
In some exemplary embodiments of the method for training a speaker-independent speech recognition system, the generic voice templates include voice templates for other words that sound similar to the word, while in others the generic voice templates include voice templates for other words from the same class of words.
In some exemplary embodiments of the method for training a speaker-independent speech recognition system, the substitute word includes a different enunciation of the word, while in others the substitute word includes a new word chosen by a user that is different from the word.
In another aspect, the present invention embraces a method for re-training a speech recognition system. The method begins with acquiring a speech sample of a word using the speech recognition system. This speech sample is then compared to voice templates of words from an application vocabulary. If the speech sample matches more than one of the voice templates of the words from the application vocabulary, then the user is prompted to re-train the speech recognition system using an alternate word in place of the word.
In an exemplary embodiment of the method for re-training a speech recognition system, it is first determined that the speech recognition system has poor performance before acquiring the speech sample of a word.
In another exemplary embodiment of the method for re-training a speech recognition system, the voice templates include voice templates for words that sound similar to the word.
In another exemplary embodiment of the method for re-training a speech recognition system, the speech sample includes utterances of phrases that use the word.
In some exemplary embodiments of the method for re-training a speech recognition system, the alternate word includes a word chosen from a list of suggested words, while in other embodiments the alternate word includes a set of words.
The foregoing illustrative summary, as well as other exemplary objectives and/or advantages of the invention, and the manner in which the same are accomplished, are further explained within the following detailed description and its accompanying drawings.
Voice-directed workflow systems (e.g., used in warehouses or distribution centers) may benefit from speech recognition. Speech recognition systems help workers perform tasks (e.g., picking or restocking) without the need for paper or displays. As a result, the worker's hands and eyes are free to perform a task.
In these systems, each worker uses a speech recognition system communicatively connected to a host computer running software that supervises the workflow. A task prompt for a worker may be created by the host computer and then sent wirelessly to a speech recognition system worn by a worker. The speech recognition system may then convert the text/data task prompt into speech (e.g., using a speech synthesizer) and relay the spoken task prompt to the worker via a speaker (e.g., an earphone). The worker's spoken responses may be collected via a microphone, recognized as speech, converted into data/text, and then transmitted back to the host computer wirelessly.
The audio I/O device is communicatively coupled to a computing device 7. In some possible embodiments, the audio I/O device is integrated with the computing device 7 into a headset. In others, like the embodiment shown in the accompanying drawings, the audio I/O device is separate from the computing device 7.
The computing device 7 may be a single-purpose device, a multipurpose device (e.g., barcode scanner), or a general-purpose device like a smartphone. The computing device 7 may include a variety of means for input/output (e.g., a display, buttons, touchscreen, etc.) and may have connectors 2 that enable peripheral input/output devices to be attached either temporarily or permanently.
The computing device 7 typically has some means of storage or memory (e.g., RAM, ROM, CD, DVD, hard-drive, solid state drive, etc.). Software programs and data may be stored in the memory and accessed by a processor (e.g., one or more controllers, digital signal processor (DSP), application specific integrated circuit (ASIC), programmable gate array (PGA), and/or programmable logic controller (PLC)).
The software programs stored in the memory and accessed by the processor may enable the speech recognition system to convert a digitally sampled voice waveform signal into text/data that represent the speech's intended meaning.
To accomplish speech recognition, the speech recognizer must first detect that something was spoken rather than some other sound (e.g., breath, wind, background noise, etc.). Next, the waveforms for the spoken words/phrases may be compared to a selected set of voice templates. The selected set of voice templates may be voice templates for expected words/phrases determined by the workflow dialog. For example, the response to a yes/no question is expected to be “yes” or “no.” The speech recognizer determines which word/phrase from the selected set best matches what was spoken. For example, a similarity score may be computed between the spoken word and a voice template. If this similarity score is above a threshold, then the spoken word may be considered an acceptable match to the voice template.
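For example, selecting the best-matching template under an acceptance threshold might be sketched as follows (the `score` function and threshold are assumptions; this disclosure does not prescribe a particular measure):

```python
def best_match(spoken, selected_templates, score, threshold):
    """Pick the expected word/phrase whose template best matches the speech.

    `selected_templates` holds only the templates expected at this dialog
    point (e.g., {"yes": ..., "no": ...}). Returns None when no similarity
    score clears the acceptance threshold.
    """
    if not selected_templates:
        return None
    best = max(selected_templates, key=lambda w: score(spoken, selected_templates[w]))
    return best if score(spoken, selected_templates[best]) >= threshold else None
```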
A voice template is a representative voice waveform for a particular word. An application vocabulary is a collection of voice templates representing the words in the vocabulary. These voice templates may be unique to each user (i.e., custom) or may be generic for all users. Creating a custom voice template requires training.
Training allows each worker to create custom voice templates for the words in the application vocabulary. For example, a new worker may be required to train a speech recognition system before use (i.e., enrollment training). During a training session, a word or phrase may be presented to a worker via a display (e.g., on a display temporarily attached to the computing device 7). The worker may read the word aloud several times into the microphone 4. A program running on the computing device 7 may receive the speech signals and compute a statistical average of the samples to form a voice template. The voice template may then be stored in the memory as part of the application vocabulary.
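A minimal sketch of one such enrollment step, assuming each utterance arrives as a 2-D array of feature frames and using a crude length normalization in place of the time alignment a real system would perform, could be:

```python
import numpy as np

def enroll_word(word, record_utterance, repetitions=3):
    """Prompt for several repetitions of `word`, average them into a template.

    `record_utterance` is a placeholder returning one utterance as a 2-D
    array of feature frames; real systems would time-align the repetitions
    (e.g., with dynamic time warping) before averaging.
    """
    print(f"Please say '{word}' {repetitions} times.")   # shown on the display
    takes = [np.asarray(record_utterance(), float) for _ in range(repetitions)]
    length = min(len(t) for t in takes)                  # crude length normalization
    return np.mean(np.stack([t[:length] for t in takes]), axis=0)
```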
Custom voice templates are used in speaker-dependent speech recognition systems, while generic voice templates are used in speaker-independent speech recognition systems. Some speech recognition systems, however, may have both generic and custom voice templates to improve accuracy for a particular user on words that may sound alike.
Re-training (i.e., update training) a speech recognition system is sometimes necessary. In some cases, a speech recognition system will have poor performance on a particular word. For example, a user may notice that the system often requires the user to repeat the word, or a user may notice that the system falsely recognizes one word as another. Here, the worker may initiate re-training in order to create a new voice template for the word. In some embodiments, the detection of poor performance and/or the re-training may be done automatically by the speech recognition system.
One cause of poor recognition performance is voice template similarity. Similar voice templates make template matching difficult. Similar voice templates are common for words that sound similar (e.g., “five” and “nine”). Similarity is especially troublesome for words of the same class (e.g., numbers), words that may be spoken together, and/or words that are equally expected at a dialog response point. Sometimes the similarity can be corrected by better enunciation or a different pronunciation of the word/phrase.
The present invention embraces methods that prevent voice template similarity from arising during training or re-training. These methods proactively prevent workers from completing training of an application vocabulary with voice templates for words that could otherwise confuse the speech recognition system.
The method begins with the step of acquiring a speech sample 8. This speech sample is typically a spoken word but could also be a set of spoken words (i.e., a phrase). The speech sample may be a word/phrase spoken once or may be a word/phrase spoken repeatedly. The word/phrase is part of an application vocabulary that includes voice templates for different words/phrases. The voice templates for words/phrases in the application vocabulary may be generic voice templates for all users or may be custom voice templates for a single user.
A voice template is created 10 for the spoken word from the speech sample. The voice template may be a file of data points representing the digital samples of the voice waveform created when the word is spoken into the microphone 4 and digitized by the computing device 7.
The voice template for the word is compared to voice templates from the application vocabulary 15. This comparison may yield a similarity score that may be used as the basis for determining if the voice template for the word is too similar to other words already in the application vocabulary. Various methods such as dynamic time warping (DTW) may be used to evaluate this similarity. For example, a similarity score may be created and compared to a threshold to determine if two words match.
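For reference, a textbook DTW distance between two feature sequences can be sketched as below; the per-frame distance and any similarity score derived from it (e.g., 1 / (1 + distance)) are illustrative choices, not prescribed by this disclosure:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two feature sequences.

    `a` and `b` are 2-D arrays of feature frames; a smaller distance means
    the sequences are more similar.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```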
The created voice template may be compared to all of the words in the application vocabulary or to a subset of words in the application vocabulary. For example, a subset of words may be words that sound alike or words of the same type (e.g., rhyming words) or class (e.g., numbers).
In speaker-dependent speech recognition systems, template similarity may be found if the created voice template matches the wrong word's template, or matches both the correct word's template and at least one other word's custom template. For speaker-independent speech recognition systems, template similarity may occur when the created voice template matches multiple generic voice templates or the wrong generic voice template. When similarity is found 20, the user may be prompted to create a new voice template for the word in a way that is more likely to be distinct from the other voice templates in the application vocabulary. This prompt may be embodied as a voice message on a speaker and/or a text/graphical message on a display.
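The two similarity conditions described above might be expressed as follows; this is a sketch, with `matched_words` assumed to come from the threshold comparison described earlier:

```python
def similarity_found(matched_words, target_word, speaker_dependent):
    """Decide whether template similarity was found (step 20).

    `matched_words` is the set of vocabulary words whose stored templates
    matched the newly created template.
    """
    if speaker_dependent:
        # Matches the wrong word, or the correct word plus at least one other.
        return any(w != target_word for w in matched_words)
    # Speaker-independent: multiple generic matches, or only the wrong one.
    return len(matched_words) > 1 or (
        len(matched_words) == 1 and target_word not in matched_words)
```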
The method also includes the step of providing instructions (i.e., prompts) to a worker to help the worker create a less similar voice template for the word 30. These instructions may include a list of possible alternate words that could be used in place of the word. For example, the alternate word “fiver” might be suggested for use in place of the word “five.” In another embodiment, the instructions provided could include prompts to help a worker enunciate the word more clearly or to emphasize the word differently (e.g., emphasize the “f” in “five”). In still another embodiment, a user may create their own word or sound to represent the word. This option may be especially useful for workers whose native language is different from the application dialog language. For example, a worker may choose to say “cinco” for the word “five.”
The method continues when a user applies the instructions and creates a new template for the alternate word. Here, the method may repeat creating alternate voice templates until a suitable alternate word (i.e., one with no template similarity) is found. When a suitable alternate (i.e., substitute) word has been found, training for that word ends and the substitute word's voice template is stored in the application vocabulary 25. From that point on, the substitute word represents the dialog word in the application vocabulary. For example, the method may result in the voice template for “fiver” being stored in the application vocabulary for the word “five.” At this point, other words may be trained or the training of the speech recognition system may conclude.
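The retry loop described here could be sketched as follows; every callable is a placeholder for system-specific behavior rather than an API defined by this disclosure:

```python
def train_until_distinct(word, vocabulary, acquire_samples, build_template,
                         conflicts_with, suggest_alternate):
    """Repeat training until a template with no similarity conflicts is found.

    `conflicts_with` returns the vocabulary words too similar to a candidate
    template; `suggest_alternate` prompts the user with instructions or an
    alternate word (e.g., "fiver" in place of "five"). Both are placeholders.
    """
    candidate = word
    while True:
        template = build_template(acquire_samples(candidate))
        if not conflicts_with(template, vocabulary, exclude=word):
            vocabulary[word] = template   # the substitute word now represents `word`
            return candidate
        candidate = suggest_alternate(word)
```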
Sometimes re-training a speech recognition system on a word is required. A flowchart for a method for re-training a speech recognition system according to an embodiment of the present invention is shown in the accompanying drawings.
A speech recognition system may periodically evaluate its performance 35. If the speech recognizer is performing poorly (e.g., on a particular word), then re-training may be initiated automatically. In some possible embodiments, the re-training may be initiated manually by a user. This initiation of re-training may be based on a user's evaluation or perception of the system's performance or may be for other reasons.
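One simple, hypothetical way to trigger re-training automatically is to track how often the system asks users to repeat each word; the 20% limit below is an arbitrary illustrative threshold, not taken from the disclosure:

```python
def words_needing_retraining(stats, repeat_rate_limit=0.2):
    """Flag words whose recognition performance appears poor (step 35).

    `stats` maps each word to a (repeat_requests, total_uses) pair.
    """
    return [word for word, (repeats, uses) in stats.items()
            if uses and repeats / uses > repeat_rate_limit]
```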
Re-training a speech recognition system begins with acquiring a speech sample (e.g., phrases that use the word) of a word 40. The speech sample is compared to voice templates for words (e.g., words that sound similar to the word) from an application vocabulary 45. If the speech sample matches the wrong word or matches multiple words in the application vocabulary, then the user is prompted (e.g., via graphics/text on a graphical user interface display) to re-train the system using an alternate word 55, 60. In one possible embodiment, the alternate word includes a word chosen from a list of suggested words. In another possible embodiment, the alternate word includes a set of words (i.e., a phrase) to represent the word. For example, the word “five” could be replaced with the phrase “number five.”
In some embodiments, choosing alternate words 55, re-training 60, and comparing the alternate word to the application vocabulary 45 may continue until a suitably different voice template is created for the word. When a suitable alternate word is found, the voice template for this alternate word is inserted into the application vocabulary for the word and the re-training ends.
To supplement the present disclosure, this application incorporates entirely by reference the following commonly assigned patents, patent application publications, and patent applications:
In the specification and/or figures, typical embodiments of the invention have been disclosed. The present invention is not limited to such exemplary embodiments. The use of the term “and/or” includes any and all combinations of one or more of the associated listed items. The figures are schematic representations and so are not necessarily drawn to scale. Unless otherwise noted, specific terms have been used in a generic and descriptive sense and not for purposes of limitation.
| Number | Name | Date | Kind |
|---|---|---|---|
| 4972485 | Dautrich et al. | Nov 1990 | A |
| 4994983 | Landell | Feb 1991 | A |
| 6535850 | Bayya | Mar 2003 | B1 |
| 6832725 | Gardiner et al. | Dec 2004 | B2 |
| 7128266 | Marlton et al. | Oct 2006 | B2 |
| 7159783 | Walczyk et al. | Jan 2007 | B2 |
| 7413127 | Ehrhart et al. | Aug 2008 | B2 |
| 7726575 | Wang et al. | Jun 2010 | B2 |
| 8294969 | Plesko | Oct 2012 | B2 |
| 8317105 | Kotlarsky et al. | Nov 2012 | B2 |
| 8322622 | Suzhou et al. | Dec 2012 | B2 |
| 8366005 | Kotlarsky et al. | Feb 2013 | B2 |
| 8371507 | Haggerty et al. | Feb 2013 | B2 |
| 8376233 | Horn et al. | Feb 2013 | B2 |
| 8381979 | Franz | Feb 2013 | B2 |
| 8390909 | Plesko | Mar 2013 | B2 |
| 8408464 | Zhu et al. | Apr 2013 | B2 |
| 8408468 | Horn et al. | Apr 2013 | B2 |
| 8408469 | Good | Apr 2013 | B2 |
| 8424768 | Rueblinger et al. | Apr 2013 | B2 |
| 8448863 | Xian et al. | May 2013 | B2 |
| 8457013 | Essinger et al. | Jun 2013 | B2 |
| 8459557 | Havens et al. | Jun 2013 | B2 |
| 8469272 | Kearney | Jun 2013 | B2 |
| 8474712 | Kearney et al. | Jul 2013 | B2 |
| 8479992 | Kotlarsky et al. | Jul 2013 | B2 |
| 8490877 | Kearney | Jul 2013 | B2 |
| 8517271 | Kotlarsky et al. | Aug 2013 | B2 |
| 8523076 | Good | Sep 2013 | B2 |
| 8528818 | Ehrhart et al. | Sep 2013 | B2 |
| 8544737 | Gomez et al. | Oct 2013 | B2 |
| 8548420 | Grunow et al. | Oct 2013 | B2 |
| 8550335 | Samek et al. | Oct 2013 | B2 |
| 8550354 | Gannon et al. | Oct 2013 | B2 |
| 8550357 | Kearney | Oct 2013 | B2 |
| 8556174 | Kosecki et al. | Oct 2013 | B2 |
| 8556176 | Van Horn et al. | Oct 2013 | B2 |
| 8556177 | Hussey et al. | Oct 2013 | B2 |
| 8559767 | Barber et al. | Oct 2013 | B2 |
| 8561895 | Gomez et al. | Oct 2013 | B2 |
| 8561903 | Sauerwein | Oct 2013 | B2 |
| 8561905 | Edmonds et al. | Oct 2013 | B2 |
| 8565107 | Pease et al. | Oct 2013 | B2 |
| 8571307 | Li et al. | Oct 2013 | B2 |
| 8579200 | Samek et al. | Nov 2013 | B2 |
| 8583924 | Caballero et al. | Nov 2013 | B2 |
| 8584945 | Wang et al. | Nov 2013 | B2 |
| 8587595 | Wang | Nov 2013 | B2 |
| 8587697 | Hussey et al. | Nov 2013 | B2 |
| 8588869 | Sauerwein et al. | Nov 2013 | B2 |
| 8590789 | Nahill et al. | Nov 2013 | B2 |
| 8596539 | Havens et al. | Dec 2013 | B2 |
| 8596542 | Havens et al. | Dec 2013 | B2 |
| 8596543 | Havens et al. | Dec 2013 | B2 |
| 8599271 | Havens et al. | Dec 2013 | B2 |
| 8599957 | Peake et al. | Dec 2013 | B2 |
| 8600158 | Li et al. | Dec 2013 | B2 |
| 8600167 | Showering | Dec 2013 | B2 |
| 8602309 | Longacre et al. | Dec 2013 | B2 |
| 8608053 | Meier et al. | Dec 2013 | B2 |
| 8608071 | Liu et al. | Dec 2013 | B2 |
| 8611309 | Wang et al. | Dec 2013 | B2 |
| 8615487 | Gomez et al. | Dec 2013 | B2 |
| 8621123 | Caballero | Dec 2013 | B2 |
| 8622303 | Meier et al. | Jan 2014 | B2 |
| 8628013 | Ding | Jan 2014 | B2 |
| 8628015 | Wang et al. | Jan 2014 | B2 |
| 8628016 | Winegar | Jan 2014 | B2 |
| 8629926 | Wang | Jan 2014 | B2 |
| 8630491 | Longacre et al. | Jan 2014 | B2 |
| 8635309 | Berthiaume et al. | Jan 2014 | B2 |
| 8636200 | Kearney | Jan 2014 | B2 |
| 8636212 | Nahill et al. | Jan 2014 | B2 |
| 8636215 | Ding et al. | Jan 2014 | B2 |
| 8636224 | Wang | Jan 2014 | B2 |
| 8638806 | Wang et al. | Jan 2014 | B2 |
| 8640958 | Lu et al. | Feb 2014 | B2 |
| 8640960 | Wang et al. | Feb 2014 | B2 |
| 8643717 | Li et al. | Feb 2014 | B2 |
| 8646692 | Meier et al. | Feb 2014 | B2 |
| 8646694 | Wang et al. | Feb 2014 | B2 |
| 8657200 | Ren et al. | Feb 2014 | B2 |
| 8659397 | Vargo et al. | Feb 2014 | B2 |
| 8668149 | Good | Mar 2014 | B2 |
| 8678285 | Kearney | Mar 2014 | B2 |
| 8678286 | Smith et al. | Mar 2014 | B2 |
| 8682077 | Longacre | Mar 2014 | B1 |
| D702237 | Oberpriller et al. | Apr 2014 | S |
| 8687282 | Feng et al. | Apr 2014 | B2 |
| 8692927 | Pease et al. | Apr 2014 | B2 |
| 8695880 | Bremer et al. | Apr 2014 | B2 |
| 8698949 | Grunow et al. | Apr 2014 | B2 |
| 8702000 | Barber et al. | Apr 2014 | B2 |
| 8717494 | Gannon | May 2014 | B2 |
| 8720783 | Biss et al. | May 2014 | B2 |
| 8723804 | Fletcher et al. | May 2014 | B2 |
| 8723904 | Marty et al. | May 2014 | B2 |
| 8727223 | Wang | May 2014 | B2 |
| 8740082 | Wilz | Jun 2014 | B2 |
| 8740085 | Furlong et al. | Jun 2014 | B2 |
| 8746563 | Hennick et al. | Jun 2014 | B2 |
| 8750445 | Peake et al. | Jun 2014 | B2 |
| 8752766 | Xian et al. | Jun 2014 | B2 |
| 8756059 | Braho et al. | Jun 2014 | B2 |
| 8757495 | Qu et al. | Jun 2014 | B2 |
| 8760563 | Koziol et al. | Jun 2014 | B2 |
| 8736909 | Reed et al. | Jul 2014 | B2 |
| 8777108 | Coyle | Jul 2014 | B2 |
| 8777109 | Oberpriller et al. | Jul 2014 | B2 |
| 8779898 | Havens et al. | Jul 2014 | B2 |
| 8781520 | Payne et al. | Jul 2014 | B2 |
| 8783573 | Havens et al. | Jul 2014 | B2 |
| 8789757 | Barten | Jul 2014 | B2 |
| 8789758 | Hawley et al. | Jul 2014 | B2 |
| 8789759 | Xian et al. | Jul 2014 | B2 |
| 8794520 | Wang et al. | Aug 2014 | B2 |
| 8794522 | Ehrhart | Aug 2014 | B2 |
| 8794525 | Amundsen et al. | Aug 2014 | B2 |
| 8794526 | Wang et al. | Aug 2014 | B2 |
| 8798367 | Ellis | Aug 2014 | B2 |
| 8807431 | Wang et al. | Aug 2014 | B2 |
| 8807432 | Van Horn et al. | Aug 2014 | B2 |
| 8820630 | Qu et al. | Sep 2014 | B2 |
| 8822848 | Meagher | Sep 2014 | B2 |
| 8824692 | Sheerin et al. | Sep 2014 | B2 |
| 8824696 | Braho | Sep 2014 | B2 |
| 8842849 | Wahl et al. | Sep 2014 | B2 |
| 8844822 | Kotlarsky et al. | Sep 2014 | B2 |
| 8844823 | Fritz et al. | Sep 2014 | B2 |
| 8849019 | Li et al. | Sep 2014 | B2 |
| D716285 | Chaney et al. | Oct 2014 | S |
| 8851383 | Yeakley et al. | Oct 2014 | B2 |
| 8854633 | Laffargue | Oct 2014 | B2 |
| 8866963 | Grunow et al. | Oct 2014 | B2 |
| 8868421 | Braho et al. | Oct 2014 | B2 |
| 8868519 | Maloy et al. | Oct 2014 | B2 |
| 8868802 | Barten | Oct 2014 | B2 |
| 8868803 | Bremer et al. | Oct 2014 | B2 |
| 8870074 | Gannon | Oct 2014 | B1 |
| 8879639 | Sauerwein | Nov 2014 | B2 |
| 8880426 | Smith | Nov 2014 | B2 |
| 8881983 | Havens et al. | Nov 2014 | B2 |
| 8881987 | Wang | Nov 2014 | B2 |
| 8903172 | Smith | Dec 2014 | B2 |
| 8908995 | Benos et al. | Dec 2014 | B2 |
| 8910870 | Li et al. | Dec 2014 | B2 |
| 8910875 | Ren et al. | Dec 2014 | B2 |
| 8914290 | Hendrickson et al. | Dec 2014 | B2 |
| 8914788 | Pettinelli et al. | Dec 2014 | B2 |
| 8915439 | Feng et al. | Dec 2014 | B2 |
| 8915444 | Havens et al. | Dec 2014 | B2 |
| 8916789 | Woodburn | Dec 2014 | B2 |
| 8918250 | Hollifield | Dec 2014 | B2 |
| 8918564 | Caballero | Dec 2014 | B2 |
| 8925818 | Kosecki et al. | Jan 2015 | B2 |
| 8939374 | Jovanovski et al. | Jan 2015 | B2 |
| 8942480 | Ellis | Jan 2015 | B2 |
| 8944313 | Williams et al. | Feb 2015 | B2 |
| 8944327 | Meier et al. | Feb 2015 | B2 |
| 8944332 | Harding et al. | Feb 2015 | B2 |
| 8950678 | Germaine et al. | Feb 2015 | B2 |
| D723560 | Zhou et al. | Mar 2015 | S |
| 8967468 | Gomez et al. | Mar 2015 | B2 |
| 8971346 | Sevier | Mar 2015 | B2 |
| 8976030 | Cunningham et al. | Mar 2015 | B2 |
| 8976368 | Akel et al. | Mar 2015 | B2 |
| 8978981 | Guan | Mar 2015 | B2 |
| 8978983 | Bremer et al. | Mar 2015 | B2 |
| 8978984 | Hennick et al. | Mar 2015 | B2 |
| 8985456 | Zhu et al. | Mar 2015 | B2 |
| 8985457 | Soule et al. | Mar 2015 | B2 |
| 8985459 | Kearney et al. | Mar 2015 | B2 |
| 8985461 | Gelay et al. | Mar 2015 | B2 |
| 8988578 | Showering | Mar 2015 | B2 |
| 8988590 | Gillet et al. | Mar 2015 | B2 |
| 8991704 | Hopper et al. | Mar 2015 | B2 |
| 8996194 | Davis et al. | Mar 2015 | B2 |
| 8996384 | Funyak et al. | Mar 2015 | B2 |
| 8998091 | Edmonds et al. | Apr 2015 | B2 |
| 9002641 | Showering | Apr 2015 | B2 |
| 9007368 | Laffargue et al. | Apr 2015 | B2 |
| 9010641 | Qu et al. | Apr 2015 | B2 |
| 9015513 | Murawski et al. | Apr 2015 | B2 |
| 9016576 | Brady et al. | Apr 2015 | B2 |
| D730357 | Fitch et al. | May 2015 | S |
| 9022288 | Nahill et al. | May 2015 | B2 |
| 9030964 | Essinger et al. | May 2015 | B2 |
| 9033240 | Smith et al. | May 2015 | B2 |
| 9033242 | Gillet et al. | May 2015 | B2 |
| 9036054 | Koziol et al. | May 2015 | B2 |
| 9037344 | Chamberlin | May 2015 | B2 |
| 9038911 | Xian et al. | May 2015 | B2 |
| 9038915 | Smith | May 2015 | B2 |
| D730901 | Oberpriller et al. | Jun 2015 | S |
| D730902 | Fitch et al. | Jun 2015 | S |
| D733112 | Chaney et al. | Jun 2015 | S |
| 9047098 | Barten | Jun 2015 | B2 |
| 9047359 | Caballero et al. | Jun 2015 | B2 |
| 9047420 | Caballero | Jun 2015 | B2 |
| 9047525 | Barber | Jun 2015 | B2 |
| 9047531 | Showering et al. | Jun 2015 | B2 |
| 9049640 | Wang et al. | Jun 2015 | B2 |
| 9053055 | Caballero | Jun 2015 | B2 |
| 9053378 | Hou et al. | Jun 2015 | B1 |
| 9053380 | Xian et al. | Jun 2015 | B2 |
| 9057641 | Amundsen et al. | Jun 2015 | B2 |
| 9058526 | Powilleit | Jun 2015 | B2 |
| 9064165 | Havens et al. | Jun 2015 | B2 |
| 9064167 | Xian et al. | Jun 2015 | B2 |
| 9064168 | Todeschini et al. | Jun 2015 | B2 |
| 9064254 | Todeschini et al. | Jun 2015 | B2 |
| 9066032 | Wang | Jun 2015 | B2 |
| 9070032 | Corcoran | Jun 2015 | B2 |
| D734339 | Zhou et al. | Jul 2015 | S |
| D734751 | Oberpriller et al. | Jul 2015 | S |
| 9082023 | Feng et al. | Jul 2015 | B2 |
| 20060116877 | Pickering | Jun 2006 | A1 |
| 20070063048 | Havens et al. | Mar 2007 | A1 |
| 20090134221 | Zhu et al. | May 2009 | A1 |
| 20100177076 | Essinger et al. | Jul 2010 | A1 |
| 20100177080 | Essinger et al. | Jul 2010 | A1 |
| 20100177707 | Essinger et al. | Jul 2010 | A1 |
| 20100177749 | Essinger et al. | Jul 2010 | A1 |
| 20110169999 | Grunow et al. | Jul 2011 | A1 |
| 20110202554 | Powilleit et al. | Aug 2011 | A1 |
| 20120111946 | Golant | May 2012 | A1 |
| 20120168512 | Kotlarsky et al. | Jul 2012 | A1 |
| 20120193423 | Samek | Aug 2012 | A1 |
| 20120203647 | Smith | Aug 2012 | A1 |
| 20120223141 | Good et al. | Sep 2012 | A1 |
| 20130043312 | Van Horn | Feb 2013 | A1 |
| 20130075168 | Amundsen et al. | Mar 2013 | A1 |
| 20130090921 | Liu | Apr 2013 | A1 |
| 20130175341 | Kearney et al. | Jul 2013 | A1 |
| 20130175343 | Good | Jul 2013 | A1 |
| 20130257744 | Daghigh et al. | Oct 2013 | A1 |
| 20130257759 | Daghigh | Oct 2013 | A1 |
| 20130262117 | Heckmann | Oct 2013 | A1 |
| 20130270346 | Xian et al. | Oct 2013 | A1 |
| 20130287258 | Kearney | Oct 2013 | A1 |
| 20130292475 | Kotlarsky et al. | Nov 2013 | A1 |
| 20130292477 | Hennick et al. | Nov 2013 | A1 |
| 20130293539 | Hunt et al. | Nov 2013 | A1 |
| 20130293540 | Laffargue et al. | Nov 2013 | A1 |
| 20130306728 | Thuries et al. | Nov 2013 | A1 |
| 20130306731 | Pedraro | Nov 2013 | A1 |
| 20130307964 | Bremer et al. | Nov 2013 | A1 |
| 20130308625 | Corcoran | Nov 2013 | A1 |
| 20130313324 | Koziol et al. | Nov 2013 | A1 |
| 20130313325 | Wilz et al. | Nov 2013 | A1 |
| 20130342717 | Havens et al. | Dec 2013 | A1 |
| 20140001267 | Giordano et al. | Jan 2014 | A1 |
| 20140002828 | Laffargue et al. | Jan 2014 | A1 |
| 20140008439 | Wang | Jan 2014 | A1 |
| 20140025584 | Liu et al. | Jan 2014 | A1 |
| 20140034734 | Sauerwein | Feb 2014 | A1 |
| 20140036848 | Pease et al. | Feb 2014 | A1 |
| 20140039693 | Havens et al. | Feb 2014 | A1 |
| 20140042814 | Kather et al. | Feb 2014 | A1 |
| 20140049120 | Kohtz et al. | Feb 2014 | A1 |
| 20140049635 | Laffargue et al. | Feb 2014 | A1 |
| 20140061306 | Wu et al. | Mar 2014 | A1 |
| 20140063289 | Hussey et al. | Mar 2014 | A1 |
| 20140066136 | Sauerwein et al. | Mar 2014 | A1 |
| 20140067692 | Ye et al. | Mar 2014 | A1 |
| 20140070005 | Nahill et al. | Mar 2014 | A1 |
| 20140071840 | Venancio | Mar 2014 | A1 |
| 20140074746 | Wang | Mar 2014 | A1 |
| 20140076974 | Havens et al. | Mar 2014 | A1 |
| 20140078341 | Havens et al. | Mar 2014 | A1 |
| 20140078342 | Li et al. | Mar 2014 | A1 |
| 20140078345 | Showering | Mar 2014 | A1 |
| 20140098792 | Wang et al. | Apr 2014 | A1 |
| 20140100774 | Showering | Apr 2014 | A1 |
| 20140100813 | Showering | Apr 2014 | A1 |
| 20140103115 | Meier et al. | Apr 2014 | A1 |
| 20140104413 | McCloskey et al. | Apr 2014 | A1 |
| 20140104414 | McCloskey et al. | Apr 2014 | A1 |
| 20140104416 | Li et al. | Apr 2014 | A1 |
| 20140104451 | Todeschini et al. | Apr 2014 | A1 |
| 20140106594 | Skvoretz | Apr 2014 | A1 |
| 20140106725 | Sauerwein | Apr 2014 | A1 |
| 20140108010 | Maltseff et al. | Apr 2014 | A1 |
| 20140108402 | Gomez et al. | Apr 2014 | A1 |
| 20140108682 | Caballero | Apr 2014 | A1 |
| 20140110485 | Toa et al. | Apr 2014 | A1 |
| 20140114530 | Fitch et al. | Apr 2014 | A1 |
| 20140121438 | Kearney | May 2014 | A1 |
| 20140121445 | Ding et al. | May 2014 | A1 |
| 20140124577 | Wang et al. | May 2014 | A1 |
| 20140124579 | Ding | May 2014 | A1 |
| 20140125842 | Winegar | May 2014 | A1 |
| 20140125853 | Wang | May 2014 | A1 |
| 20140125999 | Longacre et al. | May 2014 | A1 |
| 20140129378 | Richardson | May 2014 | A1 |
| 20140131441 | Nahill et al. | May 2014 | A1 |
| 20140131443 | Smith | May 2014 | A1 |
| 20140131444 | Wang | May 2014 | A1 |
| 20140131448 | Xian et al. | May 2014 | A1 |
| 20140133379 | Wang et al. | May 2014 | A1 |
| 20140136208 | Maltseff et al. | May 2014 | A1 |
| 20140140585 | Wang | May 2014 | A1 |
| 20140151453 | Meier et al. | Jun 2014 | A1 |
| 20140152882 | Samek et al. | Jun 2014 | A1 |
| 20140158770 | Sevier et al. | Jun 2014 | A1 |
| 20140159869 | Zumsteg et al. | Jun 2014 | A1 |
| 20140166755 | Liu et al. | Jun 2014 | A1 |
| 20140166757 | Smith | Jun 2014 | A1 |
| 20140166759 | Liu et al. | Jun 2014 | A1 |
| 20140168787 | Wang et al. | Jun 2014 | A1 |
| 20140175165 | Havens et al. | Jun 2014 | A1 |
| 20140175172 | Jovanovski et al. | Jun 2014 | A1 |
| 20140191644 | Chaney | Jul 2014 | A1 |
| 20140191913 | Ge et al. | Jul 2014 | A1 |
| 20140197238 | Lui et al. | Jul 2014 | A1 |
| 20140197239 | Havens et al. | Jul 2014 | A1 |
| 20140197304 | Feng et al. | Jul 2014 | A1 |
| 20140203087 | Smith et al. | Jul 2014 | A1 |
| 20140204268 | Grunow et al. | Jul 2014 | A1 |
| 20140214631 | Hansen | Jul 2014 | A1 |
| 20140217166 | Berthiaume et al. | Aug 2014 | A1 |
| 20140217180 | Liu | Aug 2014 | A1 |
| 20140231500 | Ehrhart et al. | Aug 2014 | A1 |
| 20140232930 | Anderson | Aug 2014 | A1 |
| 20140247315 | Marty et al. | Sep 2014 | A1 |
| 20140263493 | Amurgis et al. | Sep 2014 | A1 |
| 20140263645 | Smith et al. | Sep 2014 | A1 |
| 20140270196 | Braho et al. | Sep 2014 | A1 |
| 20140270229 | Braho | Sep 2014 | A1 |
| 20140278387 | DiGregorio | Sep 2014 | A1 |
| 20140282210 | Bianconi | Sep 2014 | A1 |
| 20140284384 | Lu et al. | Sep 2014 | A1 |
| 20140288933 | Braho et al. | Sep 2014 | A1 |
| 20140297058 | Barker et al. | Oct 2014 | A1 |
| 20140299665 | Barber et al. | Oct 2014 | A1 |
| 20140312121 | Lu et al. | Oct 2014 | A1 |
| 20140319220 | Coyle | Oct 2014 | A1 |
| 20140319221 | Oberpriller et al. | Oct 2014 | A1 |
| 20140326787 | Barten | Nov 2014 | A1 |
| 20140332590 | Wang et al. | Nov 2014 | A1 |
| 20140344943 | Todeschini et al. | Nov 2014 | A1 |
| 20140346233 | Liu et al. | Nov 2014 | A1 |
| 20140351317 | Smith et al. | Nov 2014 | A1 |
| 20140353373 | Van Horn et al. | Dec 2014 | A1 |
| 20140361073 | Qu et al. | Dec 2014 | A1 |
| 20140361082 | Xian et al. | Dec 2014 | A1 |
| 20140362184 | Jovanovski et al. | Dec 2014 | A1 |
| 20140363015 | Braho | Dec 2014 | A1 |
| 20140369511 | Sheerin et al. | Dec 2014 | A1 |
| 20140374483 | Lu | Dec 2014 | A1 |
| 20140374485 | Xian et al. | Dec 2014 | A1 |
| 20150001301 | Ouyang | Jan 2015 | A1 |
| 20150001304 | Todeschini | Jan 2015 | A1 |
| 20150003673 | Fletcher | Jan 2015 | A1 |
| 20150009338 | Laffargue et al. | Jan 2015 | A1 |
| 20150009610 | London et al. | Jan 2015 | A1 |
| 20150014416 | Kotlarsky et al. | Jan 2015 | A1 |
| 20150021397 | Rueblinger et al. | Jan 2015 | A1 |
| 20150028102 | Ren et al. | Jan 2015 | A1 |
| 20150028103 | Jiang | Jan 2015 | A1 |
| 20150028104 | Ma et al. | Jan 2015 | A1 |
| 20150029002 | Yeakley et al. | Jan 2015 | A1 |
| 20150032709 | Maloy et al. | Jan 2015 | A1 |
| 20150039309 | Braho et al. | Feb 2015 | A1 |
| 20150040378 | Saber et al. | Feb 2015 | A1 |
| 20150048168 | Fritz et al. | Feb 2015 | A1 |
| 20150049347 | Laffargue et al. | Feb 2015 | A1 |
| 20150051992 | Smith | Feb 2015 | A1 |
| 20150053766 | Havens et al. | Feb 2015 | A1 |
| 20150053768 | Wang et al. | Feb 2015 | A1 |
| 20150053769 | Thuries et al. | Feb 2015 | A1 |
| 20150062366 | Liu et al. | Mar 2015 | A1 |
| 20150063215 | Wang | Mar 2015 | A1 |
| 20150063676 | Lloyd et al. | Mar 2015 | A1 |
| 20150069130 | Gannon | Mar 2015 | A1 |
| 20150071818 | Todeschini | Mar 2015 | A1 |
| 20150083800 | Li et al. | Mar 2015 | A1 |
| 20150086114 | Todeschini | Mar 2015 | A1 |
| 20150088522 | Hendrickson et al. | Mar 2015 | A1 |
| 20150096872 | Woodburn | Apr 2015 | A1 |
| 20150099557 | Pettinelli et al. | Apr 2015 | A1 |
| 20150100196 | Hollifield | Apr 2015 | A1 |
| 20150102109 | Huck | Apr 2015 | A1 |
| 20150115035 | Meier et al. | Apr 2015 | A1 |
| 20150127791 | Kosecki et al. | May 2015 | A1 |
| 20150128116 | Chen et al. | May 2015 | A1 |
| 20150129659 | Feng et al. | May 2015 | A1 |
| 20150133047 | Smith et al. | May 2015 | A1 |
| 20150134470 | Hejl et al. | May 2015 | A1 |
| 20150136851 | Harding et al. | May 2015 | A1 |
| 20150136854 | Lu et al. | May 2015 | A1 |
| 20150142492 | Kumar | May 2015 | A1 |
| 20150144692 | Hejl | May 2015 | A1 |
| 20150144698 | Teng et al. | May 2015 | A1 |
| 20150144701 | Xian et al. | May 2015 | A1 |
| 20150149946 | Benos et al. | May 2015 | A1 |
| 20150161429 | Xian | Jun 2015 | A1 |
| 20150169925 | Chang et al. | Jun 2015 | A1 |
| 20150169929 | Williams et al. | Jun 2015 | A1 |
| 20150186703 | Chen et al. | Jul 2015 | A1 |
| 20150193644 | Kearney et al. | Jul 2015 | A1 |
| 20150193645 | Colavito et al. | Jul 2015 | A1 |
| 20150199957 | Funyak et al. | Jul 2015 | A1 |
| 20150204671 | Showering | Jul 2015 | A1 |
| Number | Date | Country |
|---|---|---|
| 1079370 | Feb 2001 | EP |
| 2013163789 | Nov 2013 | WO |
| 2013173985 | Nov 2013 | WO |
| 2014019130 | Feb 2014 | WO |
| 2014110495 | Jul 2014 | WO |
| Entry |
|---|
| U.S. Appl. No. 14/519,179 for Dimensioning System With Multipath Interference Mitigation filed Oct. 21, 2014 (Thuries et al.); 30 pages. |
| U.S. Appl. No. 14/264,173 for Autofocus Lens System for Indicia Readers filed Apr. 29, 2014, (Ackley et al.); 39 pages. |
| U.S. Appl. No. 14/453,019 for Dimensioning System With Guided Alignment, filed Aug. 6, 2014 (Li et al.); 31 pages. |
| U.S. Appl. No. 14/452,697 for Interactive Indicia Reader, filed Aug. 6, 2014, (Todeschini); 32 pages. |
| U.S. Appl. No. 14/231,898 for Hand-Mounted Indicia-Reading Device with Finger Motion Triggering filed Apr. 1, 2014 (Van Horn et al.); 36 pages. |
| U.S. Appl. No. 14/715,916 for Evaluating Image Values filed May 19, 2015 (Ackley); 60 pages. |
| U.S. Appl. No. 14/513,808 for Identifying Inventory Items in a Storage Facility filed Oct. 14, 2014 (Singel et al.); 51 pages. |
| U.S. Appl. No. 29/458,405 for an Electronic Device, filed Jun. 19, 2013 (Fitch et al.); 22 pages. |
| U.S. Appl. No. 29/459,620 for an Electronic Device Enclosure, filed Jul. 2, 2013 (London et al.); 21 pages. |
| U.S. Appl. No. 14/483,056 for Variable Depth of Field Barcode Scanner filed Sep. 10, 2014 (McCloskey et al.); 29 pages. |
| U.S. Appl. No. 14/531,154 for Directing an Inspector Through an Inspection filed Nov. 3, 2014 (Miller et al.); 53 pages. |
| U.S. Appl. No. 29/525,068 for Tablet Computer With Removable Scanning Device filed Apr. 27, 2015 (Schulte et al.); 19 pages. |
| U.S. Appl. No. 29/468,118 for an Electronic Device Case, filed Sep. 26, 2013 (Oberpriller et al.); 44 pages. |
| U.S. Appl. No. 14/340,627 for an Axially Reinforced Flexible Scan Element, filed Jul. 25, 2014 (Reublinger et al.); 41 pages. |
| U.S. Appl. No. 14/676,327 for Device Management Proxy for Secure Devices filed Apr. 1, 2015 (Yeakley et al.); 50 pages. |
| U.S. Appl. No. 14/257,364 for Docking System and Method Using Near Field Communication filed Apr. 21, 2014 (Showering); 31 pages. |
| U.S. Appl. No. 14/327,827 for a Mobile-Phone Adapter for Electronic Transactions, filed Jul. 10, 2014 (Hejl); 25 pages. |
| U.S. Appl. No. 14/334,934 for a System and Method for Indicia Verification, filed Jul. 18, 2014 (Hejl); 38 pages. |
| U.S. Appl. No. 29/530,600 for Cyclone filed Jun. 18, 2015 (Vargo et al.); 16 pages. |
| U.S. Appl. No. 14/707,123 for Application Independent DEX/UCS Interface filed May 8, 2015 (Pape); 47 pages. |
| U.S. Appl. No. 14/283,282 for Terminal Having Illumination and Focus Control filed May 21, 2014 (Liu et al.); 31 pages. |
| U.S. Appl. No. 14/619,093 for Methods for Training a Speech Recognition System filed Feb. 11, 2015 (Pecorari); 35 pages. |
| U.S. Appl. No. 29/524,186 for Scanner filed Apr. 17, 2015 (Zhou et al.); 17 pages. |
| U.S. Appl. No. 14/705,407 for Method and System to Protect Software-Based Network-Connected Devices From Advanced Persistent Threat filed May 6, 2015 (Hussey et al.); 42 pages. |
| U.S. Appl. No. 14/614,706 for Device for Supporting an Electronic Tool on a User's Hand filed Feb. 5, 2015 (Oberpriller et al.); 33 pages. |
| U.S. Appl. No. 14/628,708 for Device, System, and Method for Determining the Status of Checkout Lanes filed Feb. 23, 2015 (Todeschini); 37 pages. |
| U.S. Appl. No. 14/704,050 for Intermediate Linear Positioning filed May 5, 2015 (Charpentier et al.); 60 pages. |
| U.S. Appl. No. 14/529,563 for Adaptable Interface for a Mobile Computing Device filed Oct. 31, 2014 (Schoon et al.); 36 pages. |
| U.S. Appl. No. 14/705,012 for Hands-Free Human Machine Interface Responsive to a Driver of a Vehicle filed May 6, 2015 (Fitch et al.); 44 pages. |
| U.S. Appl. No. 14/715,672 for Augumented Reality Enabled Hazard Display filed May 19, 2015 (Venkatesha et al.); 35 pages. |
| U.S. Appl. No. 14/695,364 for Medication Management System filed Apr. 24, 2015 (Sewell et al.); 44 pages. |
| U.S. Appl. No. 14/664,063 for Method and Application for Scanning a Barcode With a Smart Device While Continuously Running and Displaying an Application on the Smart Device Display filed Mar. 20, 2015 (Todeschini); 37 pages. |
| U.S. Appl. No. 14/735,717 for Indicia-Reading Systems Having an Interface With a User's Nervous System filed Jun. 10, 2015 (Todeschini); 39 pages. |
| U.S. Appl. No. 14/527,191 for Method and System for Recognizing Speech Using Wildcards in an Expected Response filed Oct. 29, 2014 (Braho et al.); 45 pages. |
| U.S. Appl. No. 14/702,110 for System and Method for Regulating Barcode Data Injection Into a Running Application on a Smart Device filed May 1, 2015 (Todeschini et al.); 38 pages. |
| U.S. Appl. No. 14/535,764 for Concatenated Expected Responses for Speech Recognition filed Nov. 7, 2014 (Braho et al.); 51 pages. |
| U.S. Appl. No. 14/687,289 for System for Communication via a Peripheral Hub filed Apr. 15, 2015 (Kohtz et al.); 37 pages. |
| U.S. Appl. No. 14/747,197 for Optical Pattern Projector filed Jun. 23, 2015 (Thuries et al.); 33 pages. |
| U.S. Appl. No. 14/674,329 for Aimer for Barcode Scanning filed Mar. 31, 2015 (Bidwell); 36 pages. |
| U.S. Appl. No. 14/702,979 for Tracking Battery Conditions filed May 4, 2015 (Young et al.); 70 pages. |
| U.S. Appl. No. 29/529,441 for Indicia Reading Device filed Jun. 8, 2015 (Zhou et al.); 14 pages. |
| U.S. Appl. No. 14/747,490 for Dual-Projector Three-Dimensional Scanner filed Jun. 23, 2015 (Jovanovski et al.); 40 pages. |
| U.S. Appl. No. 14/740,320 for Tactile Switch for a Mobile Electronic Device filed Jun. 16, 2015 (Barndringa); 38 pages. |
| U.S. Appl. No. 14/695,923 for Secure Unattended Network Authentication filed Apr. 24, 2015 (Kubler et al.); 52 pages. |
| U.S. Appl. No. 14/740,373 for Calibrating a Volume Dimensioner filed Jun. 16, 2015 (Ackley et al.); 63 pages. |
| U.S. Appl. No. 13/367,978, filed Feb. 7, 2012, (Feng et al.); now abandoned. |
| U.S. Appl. No. 14/462,801 for Mobile Computing Device With Data Cognition Software, filed Aug. 19, 2014 (Todeschini et al.); 38 pages. |
| U.S. Appl. No. 14/596,757 for System and Method for Detecting Barcode Printing Errors filed Jan. 14, 2015 (Ackley); 41 pages. |
| U.S. Appl. No. 14/277,337 for Multipurpose Optical Reader, filed May 14, 2014 (Jovanovski et al.); 59 pages. |
| U.S. Appl. No. 14/200,405 for Indicia Reader for Size-Limited Applications filed Mar. 7, 2014 (Feng et al.); 42 pages. |
| U.S. Appl. No. 14/662,922 for Multifunction Point of Sale System filed Mar. 19, 2015 (Van Horn et al.); 41 pages. |
| U.S. Appl. No. 14/446,391 for Multifunction Point of Sale Apparatus With Optical Signature Capture filed Jul. 30, 2014 (Good et al.); 37 pages. |
| U.S. Appl. No. 29/528,165 for In-Counter Barcode Scanner filed May 27, 2015 (Oberpriller et al.); 13 pages. |
| U.S. Appl. No. 29/528,890 for Mobile Computer Housing filed Jun. 2, 2015 (Fitch et al.); 61 pages. |
| U.S. Appl. No. 14/614,796 for Cargo Apportionment Techniques filed Feb. 5, 2015 (Morton et al.); 56 pages. |
| U.S. Appl. No. 29/516,892 for Table Computer filed Feb. 6, 2015 (Bidwell et al.); 13 pages. |
| U.S. Appl. No. 29/523,098 for Handle for a Tablet Computer filed Apr. 7, 2015 (Bidwell et al.); 17 pages. |
| U.S. Appl. No. 14/578,627 for Safety System and Method filed Dec. 22, 2014 (Ackley et al.); 32 pages. |
| U.S. Appl. No. 14/573,022 for Dynamic Diagnostic Indicator Generation filed Dec. 17, 2014 (Goldsmith); 43 pages. |
| U.S. Appl. No. 14/529,857 for Barcode Reader With Security Features filed Oct. 31, 2014 (Todeschini et al.); 32 pages. |
| U.S. Appl. No. 14/519,195 for Handheld Dimensioning System With Feedback filed Oct. 21, 2014 (Laffargue et al.); 39 pages. |
| U.S. Appl. No. 14/519,211 for System and Method for Dimensioning filed Oct. 21, 2014 (Ackley et al.); 33 pages. |
| U.S. Appl. No. 14/519,233 for Handheld Dimensioner With Data-Quality Indication filed Oct. 21, 2014 (Laffargue et al.); 36 pages. |
| U.S. Appl. No. 14/533,319 for Barcode Scanning System Using Wearable Device With Embedded Camera filed Nov. 5, 2014 (Todeschini); 29 pages. |
| U.S. Appl. No. 14/748,446 for Cordless Indicia Reader With a Multifunction Coil for Wireless Charging and EAS Deactivation, filed Jun. 24, 2015 (Xie et al.); 34 pages. |
| U.S. Appl. No. 29/528,590 for Electronic Device filed May 29, 2015 (Fitch et al.); 9 pages. |
| U.S. Appl. No. 14/519,249 for Handheld Dimensioning System With Measurement-Conformance Feedback filed Oct. 21, 2014 (Ackley et al.); 36 pages. |
| U.S. Appl. No. 29/519,017 for Scanner filed Mar. 2, 2015 (Zhou et al.); 11 pages. |
| U.S. Appl. No. 14/398,542 for Portable Electronic Devices Having a Separate Location Trigger Unit for Use in Controlling an Application Unit filed Nov. 3, 2014 (Bian et al.); 22 pages. |
| U.S. Appl. No. 14/405,278 for Design Pattern for Secure Store filed Mar. 9, 2015 (Zhu et al.); 23 pages. |
| U.S. Appl. No. 14/590,024 for Shelving and Package Locating Systems for Delivery Vehicles filed Jan. 6, 2015 (Payne); 31 pages. |
| U.S. Appl. No. 14/568,305 for Auto-Contrast Viewfinder for an Indicia Reader filed Dec. 12, 2014 (Todeschini); 29 pages. |
| U.S. Appl. No. 29/526,918 for Charging Base filed May 14, 2015 (Fitch et al.); 10 pages. |
| U.S. Appl. No. 14/580,262 for Media Gate for Thermal Transfer Printers filed Dec. 23, 2014 (Bowles); 36 pages. |
| European Search Report in related EP Application No. 16154356, dated Mar. 30, 2016, 6 pages. |