Auto-complete methods for spoken complete value entries

Information

  • Patent Grant
  • Patent Number: 10,529,335
  • Date Filed: April 9, 2019
  • Date Issued: January 7, 2020
Abstract
An auto-complete method for a spoken complete value entry is provided. A processor receives a possible complete value entry having a unique subset, prompts a user to speak the spoken complete value entry, receives a spoken subset of the spoken complete value entry, compares the spoken subset with the unique subset of the possible complete value entry, and automatically completes the spoken complete value entry to match the possible complete value entry if the unique subset matches the spoken subset. The spoken subset has a predetermined minimum number of characters.
Description
FIELD OF THE INVENTION

The present invention relates to auto-complete methods, and more particularly, to auto-complete methods for spoken complete value entries.


BACKGROUND

Voice-enabled systems help users complete assigned tasks. For example, in a workflow process, a voice-enabled system may guide users through a particular task. The task may be at least a portion of the workflow process comprising at least one workflow stage. As a user completes his/her assigned tasks, a bi-directional dialog or communication stream of information is provided over a wireless network between the user wearing a mobile computing device (herein, “mobile device”) and a central computer system that is directing multiple users and verifying completion of their tasks. To direct the user's actions, information received by the mobile device from the central computer system is translated into speech or voice instructions for the corresponding user. To receive the voice instructions and transmit information, the user wears a communications headset (also referred to herein as a “headset assembly” or simply a “headset”) communicatively coupled to the mobile device.


The user may be prompted for a verbal response during completion of the task. The verbal response may be a string of characters, such as digits and/or letters. The string of characters may correspond, for example, to a credit card number, a telephone number, a serial number, a vehicle identification number, or the like. The spoken string of characters (i.e., the verbal response) may be referred to herein as a “spoken complete value entry”.


Unfortunately, speaking the spoken complete value entry is often time-consuming and difficult to do correctly, especially if the spoken complete value entry is long (i.e., the number of characters in the spoken string of characters is relatively large), referred to herein as a “spoken long value entry”. If the user (i.e., the speaker) makes errors in speaking the spoken complete value entry, conventional systems and methods often require that the user restart the spoken string, which causes user frustration and is even more time-consuming.


Therefore, a need exists for auto-complete methods for spoken complete value entries, particularly spoken long value entries, and for use in a workflow process.


SUMMARY

An auto-complete method for a spoken complete value entry is provided, according to various embodiments of the present invention. A processor receives a possible complete value entry having a unique subset, prompts a user to speak the spoken complete value entry, receives a spoken subset of the spoken complete value entry, compares the spoken subset with the unique subset of the possible complete value entry, and automatically completes the spoken complete value entry to match the possible complete value entry if the unique subset matches the spoken subset. The spoken subset has a predetermined minimum number of characters.


An auto-complete method for a spoken complete value entry is provided, according to various embodiments of the present invention. A processor receives one or more possible complete value entries, each having a unique subset, prompts a user to speak the spoken complete value entry, receives a spoken subset of the spoken complete value entry, compares the spoken subset with the unique subset of each of the possible complete value entries, automatically completes the spoken complete value entry to match a possible complete value entry of the one or more possible complete value entries if the spoken subset matches the unique subset of that possible complete value entry, and confirms the automatically completed spoken complete value entry as the spoken complete value entry. The spoken subset has a predetermined minimum number of characters.


An auto-complete method for a spoken complete value entry in a workflow process is provided, according to various embodiments of the present invention. The method comprises receiving, by a processor, a voice assignment to perform the workflow process comprising at least one workflow stage. The processor identifies a task that is to be performed by a user, the task being at least a portion of the workflow process. The processor receives a possible complete value entry having a unique subset and prompts the user to speak the spoken complete value entry. The processor receives a spoken subset of the spoken complete value entry. The spoken subset has a predetermined minimum number of characters. The processor compares the spoken subset with the unique subset of the possible complete value entry, automatically completes the spoken complete value entry to match the possible complete value entry if the unique subset matches the spoken subset, and confirms the automatically completed spoken complete value entry as the spoken complete value entry.


The foregoing illustrative summary, as well as other exemplary objectives and/or advantages of the present invention, and the manner in which the same are accomplished, are further explained within the following detailed description and its accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a simplified block diagram of a system in which an auto-complete method for spoken complete value entries may be implemented, according to various embodiments;



FIG. 2 is a diagrammatic illustration of hardware and software components of an exemplary server of the system of FIG. 1, according to various embodiments;



FIG. 3 is an illustration of the mobile computing system of the system of FIG. 1, depicting a mobile computing device and an exemplary headset that may be worn by a user performing a task in a workflow process, according to various embodiments;



FIG. 4 is a diagrammatic illustration of hardware and software components of the mobile computing device and the headset of FIG. 3, according to various embodiments; and



FIG. 5 is a flow diagram of an auto-complete method for spoken complete value entries, according to various embodiments.





DETAILED DESCRIPTION

Various embodiments are directed to auto-complete methods for spoken complete value entries. According to various embodiments, the spoken complete value entry that a user intends to speak is predicted after only a predetermined minimum number of characters (a spoken subset) have been spoken by the user. The spoken complete value entry is longer and harder to speak than the spoken subset thereof. Various embodiments speed up human-computer interactions. Various embodiments as described herein are especially useful for spoken long value entries as hereinafter described, and for use in workflow processes, thereby improving workflow efficiencies and easing worker frustration.


As used herein, the “spoken complete value entry” comprises a string of characters. As used herein, the term “string” is any finite sequence of characters (i.e., letters, numerals, symbols, and punctuation marks). Each string has a length, which is the number of characters in the string. The length can be any natural number (i.e., any positive integer). For numerals, valid entry values may be 0-9. For letters, valid entry values may be A-Z. Of course, in languages other than English, valid entry values may be different. The number of characters in a spoken complete value entry is greater than the number of characters in the spoken subset thereof, as hereinafter described. As noted previously, the number of characters in the string of characters of a spoken complete value entry comprising a spoken long value entry is relatively large. Exemplary spoken complete value entries may be credit card numbers, telephone numbers, serial numbers, vehicle identification numbers, or the like.
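For illustration only (this sketch is not part of the patent; the function name and regular expression are assumptions), a complete value entry restricted to the valid entry values described above can be checked as follows:

    import re

    # Valid entry values: digits 0-9 and letters A-Z; length is any positive integer.
    VALID_ENTRY = re.compile(r"^[0-9A-Z]+$")

    def is_valid_complete_value_entry(entry: str) -> bool:
        """Return True if the entry is a non-empty string of digits and/or capital letters."""
        return bool(VALID_ENTRY.match(entry))

    print(is_valid_complete_value_entry("1HGCM82633A004352"))  # True (a VIN-style string)
    print(is_valid_complete_value_entry(""))                   # False (length must be at least one)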


Referring now to FIG. 1, according to various embodiments, an exemplary system 10 is provided in which an auto-complete method 100 for a spoken complete value entry may be implemented. The exemplary depicted system comprises a server 12 and a mobile computing system 16 that are configured to communicate through at least one communications network 18. The communications network 18 may include any collection of computers or communication devices interconnected by communication channels. The communication channels may be wired or wireless. Examples of such communication networks 18 include, without limitation, local area networks (LAN), the internet, and cellular networks.



FIG. 2 is a diagrammatic illustration of the hardware and software components of the server 12 of system 10 according to various embodiments of the present invention. The server 12 may be a computing system, such as a computer, computing device, disk array, or programmable device, including a handheld computing device, a networked device (including a computer in a cluster configuration), a mobile telecommunications device, a video game console (or other gaming system), etc. As such, the server 12 may operate as a multi-user computer or a single-user computer. The server 12 includes at least one central processing unit (CPU) 30 coupled to a memory 32. Each CPU 30 is typically implemented in hardware using circuit logic disposed on one or more physical integrated circuit devices or chips and may be one or more microprocessors, micro-controllers, FPGAs, or ASICs. Memory 32 may include RAM, DRAM, SRAM, flash memory, and/or another digital storage medium, and is also typically implemented using circuit logic disposed on one or more physical integrated circuit devices, or chips. As such, memory 32 may be considered to include memory storage physically located elsewhere in the server 12, e.g., any cache memory in the at least one CPU 30, as well as any storage capacity used as a virtual memory, e.g., as stored on a mass storage device 34, another computing system (not shown), a network storage device (e.g., a tape drive) (not shown), or another network device (not shown) coupled to the server 12 through at least one network interface 36 (illustrated and referred to hereinafter as “network I/F” 36) by way of the communications network 18.


The server 12 may optionally (as indicated by dotted lines in FIG. 2) be coupled to at least one peripheral device through an input/output device interface 38 (illustrated as, and hereinafter, “I/O I/F” 38). In particular, the server 12 may receive data from a user through at least one user interface 40 (including, for example, a keyboard, mouse, a microphone, and/or other user interface) and/or output data to the user through at least one output device 42 (including, for example, a display, speakers, a printer, and/or another output device). Moreover, in various embodiments, the I/O I/F 38 communicates with a device that is operative as a user interface 40 and output device 42 in combination, such as a touch screen display (not shown).


The server 12 is typically under the control of an operating system 44 and executes or otherwise relies upon various computer software applications, sequences of operations, components, programs, files, objects, modules, etc., according to various embodiments of the present invention. In various embodiments, the server 12 executes or otherwise relies on one or more business logic applications 46 that are configured to provide a task message/task instruction to the mobile computing system 16. The task message/task instruction is communicated to the mobile computing system 16 for a user thereof (such as a warehouse worker) to, for example, execute a task in at least one workflow stage of a workflow process.


Referring now to FIG. 3, according to various embodiments, the mobile computing system 16 comprises a mobile computing device 70 communicatively coupled to a headset 72. The mobile computing device may comprise a portable and/or wearable mobile computing device 70 worn by a user 76, for example on a belt 78 as illustrated in the depicted embodiment of FIG. 3. In various embodiments, the mobile computing device may be carried, or otherwise transported, on a vehicle 74 (FIG. 3) used in the workflow process.


According to various embodiments, FIG. 4 is a diagrammatic illustration of at least a portion of the components of the mobile computing device 70. The mobile computing device 70 comprises a memory 92, program code resident in the memory 92, and a processor 90 communicatively coupled to the memory 92. The mobile computing device 70 further comprises a power supply 98, such as a battery, rechargeable battery, rectifier, and/or another power source, and may comprise a power monitor 75.


The processor 90 of the mobile computing device 70 is typically implemented in hardware using circuit logic disposed in one or more physical integrated circuit devices, or chips. Each processor may be one or more microprocessors, micro-controllers, field programmable gate arrays, or ASICs, while memory may include RAM, DRAM, SRAM, flash memory, and/or another digital storage medium, and is also typically implemented using circuit logic disposed in one or more physical integrated circuit devices, or chips. As such, memory is considered to include memory storage physically located elsewhere in the mobile computing device, e.g., any cache memory in the at least one processor, as well as any storage capacity used as a virtual memory, e.g., as stored on a mass storage device, a computer, and/or another device coupled to the mobile computing device, including coupled to the mobile computing device through at least one network I/F 94 by way of the communications network 18. The mobile computing device 70, in turn, couples to the communications network 18 through the network I/F 94 with at least one wired and/or wireless connection.


Still referring to FIGS. 3 and 4, according to various embodiments, the mobile computing system 16 may further comprise a user input/output device, such as the headset 72. The headset 72 may be used, for example, in voice-enabled workflow processes. In various embodiments, the user 76 may interface with the mobile computing device 70 (and the mobile computing device interfaces with the user 76) through the headset 72, which may be coupled to the mobile computing device 70 through a cord 80. In various embodiments, the headset 72 is a wireless headset and is coupled to the mobile computing device through a wireless signal (not shown). The headset 72 may include one or more speakers 82 and one or more microphones 84. The speaker 82 is configured to play audio (e.g., speech output associated with a voice dialog to instruct the user 76 to perform a task, i.e., a “voice assignment”), while the microphone 84 is configured to capture speech input from the user 76 (e.g., for conversion to machine readable input). The speech input from the user 76 may comprise a verbal response comprising the spoken complete value entry. As such, and in some embodiments, the user 76 interfaces with the mobile computing device 70 hands-free through the headset 72. The mobile computing device 70 is configured to communicate with the headset 72 through a headset interface 102 (illustrated as, and hereinafter, “headset I/F” 102), which is in turn configured to couple to the headset 72 through the cord 80 and/or wirelessly. For example, the mobile computing device 70 may be coupled to the headset 72 through the Bluetooth® open wireless technology standard that is known in the art.


Referring now specifically to FIG. 4, in various embodiments, the mobile computing device 70 may additionally include at least one input/output interface 96 (illustrated as, and hereinafter, “I/O I/F” 96) configured to communicate with at least one peripheral 113 other than the headset 72. Exemplary peripherals may include a printer, a headset, an image scanner, an identification code reader (e.g., a barcode reader or an RFID reader), a monitor, a user interface (e.g., a keyboard or keypad), an output device, and a touch screen, to name a few. In various embodiments, the I/O I/F 96 includes at least one peripheral interface, including at least one of one or more serial, universal serial bus (USB), PC Card, VGA, HDMI, DVI, and/or other interfaces (for example, other computer, communicative, data, audio, and/or visual interfaces) (none shown). In various embodiments, the mobile computing device 70 may be communicatively coupled to the peripheral(s) 113 through a wired or wireless connection, such as the Bluetooth® open wireless technology standard that is known in the art.


The mobile computing device 70 may be under the control of, and/or otherwise rely upon, various software applications, components, programs, files, objects, modules, etc. (herein the “program code” that is resident in memory 92) according to various embodiments of the present invention. This program code may include an operating system 104 (e.g., a Windows Embedded Compact operating system as distributed by Microsoft Corporation of Redmond, Wash.) as well as one or more software applications (e.g., configured to operate in an operating system or as “stand-alone” applications).


In accordance with various embodiments, the program code may include a prediction software program as hereinafter described. As such, the memory 92 may also be configured with one or more task applications 106. The one or more task applications 106 process task messages or task instructions (the “voice assignment”) for the user 76 (e.g., by displaying and/or converting the task messages or task instructions into speech output). The one or more task application(s) 106 implement a dialog flow. The task application(s) 106 communicate with the server 12 to receive task messages or task instructions. In turn, the task application(s) 106 may capture speech input for subsequent conversion to a useable digital format (e.g., machine readable input) for use by the business logic application(s) 46 of the server 12 (e.g., to update the database 48 of the server 12). As noted previously, the speech input may be a spoken subset or a user confirmation as hereinafter described. In the context of a workflow process, according to various embodiments, the processor of the mobile computing device may receive a voice assignment to perform the workflow process comprising at least one workflow stage. The processor may identify a task that is to be performed by a user, the task being at least a portion of the workflow process.


Referring now to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises collecting one or more possible complete value entries (step 105). The possible complete value entries may be collected from common responses defined by the user or system 10, or from (verbal) responses expected to be received from the user in a particular context (e.g., the workflow process, the at least one workflow stage, and/or the particular task being performed by the user). The possible complete value entries may also be based on other criteria. The possible complete value entries may be stored in the memory of the server or the memory of the mobile computing device and used to form a suggestion list for purposes as hereinafter described. The collection of the one or more possible value entries may be performed at any time prior to receiving the possible complete value entry (step 110) as hereinafter described. It is to be understood that the collection of the one or more possible value entries may only need to be performed once, with the suggestion list prepared therefrom (as hereinafter described) used multiple times.
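As a minimal sketch (the function and variable names below are illustrative assumptions, not part of the patent), collecting possible complete value entries from one or more sources into a de-duplicated suggestion list might look as follows:

    from typing import Iterable, List

    def collect_suggestion_list(*sources: Iterable[str]) -> List[str]:
        """Merge possible complete value entries (e.g., common responses and responses
        expected in the current workflow context) into one suggestion list,
        removing duplicates while preserving order."""
        seen = set()
        suggestion_list: List[str] = []
        for source in sources:
            for entry in source:
                if entry not in seen:
                    seen.add(entry)
                    suggestion_list.append(entry)
        return suggestion_list

    # Example: expected responses for a vehicle-inspection task.
    expected_responses = ["1234567890222222222", "1245678903111111111", "1345678900200000000"]
    common_responses = ["1234567890222222222"]
    print(collect_suggestion_list(common_responses, expected_responses))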


In the context of performing a workflow process, when the host system (server) sends down the voice assignment, it can optionally send the list of possible responses (or expected responses) to the mobile device.


Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises receiving a possible complete value entry having a unique subset (step 110). The processor 90 of the mobile computing device 70 is configured to receive the possible complete value entry having the unique subset from the server 12 or elsewhere (e.g., its own memory). Receiving a possible complete value entry comprises receiving the one or more possible complete value entries in the suggestion list. Each of the possible complete value entries has a unique subset configured to match with a particular spoken subset as hereinafter described. The unique subset is a predetermined portion of the possible complete value entry. The number of characters in a unique subset is less than the number of characters in the possible complete value entry. The unique subsets of the possible complete value entries may be stored in the memory of the server or the memory of the mobile computing device.
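One possible realization (an assumption for illustration; the patent does not mandate how the unique subset is chosen) is to use, as each entry's unique subset, its shortest leading prefix that no other entry in the suggestion list shares:

    from typing import Dict, List

    def unique_subsets(suggestion_list: List[str], minimum_length: int = 1) -> Dict[str, str]:
        """Map each possible complete value entry to a leading prefix that is unique
        within the suggestion list (and at least minimum_length characters long)."""
        subsets: Dict[str, str] = {}
        for entry in suggestion_list:
            length = minimum_length
            # Grow the prefix until no other entry in the list shares it.
            while length < len(entry) and any(
                other != entry and other.startswith(entry[:length]) for other in suggestion_list
            ):
                length += 1
            subsets[entry] = entry[:length]
        return subsets

    # Hypothetical serial numbers, for illustration only.
    print(unique_subsets(["A1B2C3X", "A1C9X2Y", "B7Q4Z8W"]))
    # {'A1B2C3X': 'A1B', 'A1C9X2Y': 'A1C', 'B7Q4Z8W': 'B'}

Any other predetermined portion (e.g., a fixed-length prefix) could serve as the unique subset, so long as it is unique within the suggestion list.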


For example only, the user may be assigned the task of inspecting vehicles in a workflow process. Prior to each vehicle inspection, the user may be prompted to speak the vehicle identification number (VIN) of the particular vehicle. In this exemplary context, the vehicle identification number of each of the vehicles to be inspected may be considered exemplary expected responses to be received in the context of the particular workflow process, the at least one workflow stage, and/or the particular task being performed by the user. The vehicle identification numbers may thus be possible complete value entries in the suggestion list.


Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises prompting a user to speak a spoken complete value entry (step 120). The processor of the mobile computing device is configured to prompt the user to speak the complete value entry. For example, the server 12 transmits task messages or task instructions to the mobile computing device 70 to perform a task. The processor 90 of the mobile computing device 70 receives the task messages or task instructions from the server 12 and prompts the user for a spoken complete value entry. While task messages and task instructions have been described as possible preludes for a spoken complete value entry from the user, it is to be understood that the processor 90 of the mobile computing device 70 may prompt the user for a spoken complete value entry that does not involve a task at all. For example, a prompt may be for a credit card number, a phone number, a VIN, etc.


Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises receiving a spoken subset of the spoken complete value entry (step 130). When prompted for the spoken complete value entry (step 120), the user begins speaking the complete value entry (i.e., the string of characters that make up the complete value entry). The processor of the mobile computing device receives the spoken subset from the headset when the user speaks into the one or more microphones of the headset. The spoken subset comprises a predetermined minimum number of characters of the complete value entry. The number of characters in a spoken complete value entry is greater than the number of characters in the spoken subset thereof. The predetermined minimum number of characters (i.e., the spoken subset) comprises one or more sequential characters in the string of characters of the spoken complete value entry. The predetermined minimum number of characters in the spoken subset may be (pre)determined in order to differentiate between possible complete value entries having at least one common character. For example, the suggestion list of possible complete value entries may include the following three possible complete value entries: 1234567890222222222, 1245678903111111111, and 1345678900200000000. In this example, as two of the possible complete value entries share the first characters 1 and 2, the predetermined minimum number of characters of the spoken subset may be three. The predetermined minimum number of characters may be selected in a field provided by the prediction software program.
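A minimal sketch of one way (an assumption, not the only option) to derive such a predetermined minimum number of characters is the smallest prefix length at which every possible complete value entry in the suggestion list becomes distinguishable:

    from typing import List

    def predetermined_minimum_characters(suggestion_list: List[str]) -> int:
        """Return the smallest prefix length at which all entries in the suggestion
        list have distinct prefixes."""
        max_length = max(len(entry) for entry in suggestion_list)
        for length in range(1, max_length + 1):
            prefixes = {entry[:length] for entry in suggestion_list}
            if len(prefixes) == len(suggestion_list):
                return length
        return max_length

    entries = ["1234567890222222222", "1245678903111111111", "1345678900200000000"]
    print(predetermined_minimum_characters(entries))  # 3, as in the example above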


Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises comparing the spoken subset with the unique subset of the possible complete value entry (step 140). The processor of the mobile computing device compares the spoken subset with the unique subset of the possible complete value entry. After the predetermined minimum number of characters (i.e., the spoken subset) has been spoken by the user, the processor compares the spoken subset against the possible complete value entries in the suggestion list. More particularly, the processor, configured by the prediction software program, compares the spoken subset against the unique subset of each of the possible complete value entries. The prediction software program predicts the spoken complete value entry that the user intends to speak after only the predetermined minimum number of characters (i.e., the spoken subset) has been spoken by the user. The prediction software program predicts the complete value entry by matching the spoken subset with the unique subset of one of the one or more possible complete value entries. It is to be understood that the greater the predetermined number of characters in the spoken subset (and in the unique subset), the more apt the suggestion of the spoken complete value entry is to be correct. The complete value entry is the possible complete value entry having the unique subset that matches (i.e., is the same as) the spoken subset of the complete value entry. The unique subset and the spoken subset “match” by having the same characters, in the same order, and in the same positions, i.e., the unique subset and the spoken subset are identical.


Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises automatically completing the spoken complete value entry (step 150). The spoken complete value entry is automatically completed (an “automatically completed spoken complete value entry”) to match the possible complete value entry if the suggestion list includes a possible complete value entry whose unique subset matches the spoken subset. If the spoken subset that the user speaks does not match the unique subset of at least one of the possible complete value entries on the suggestion list, the processor may alert the user that the spoken subset may be incorrect.
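As a hedged sketch of the comparing and auto-completing steps (the function names and the alerting behavior shown here are illustrative assumptions), the spoken subset is matched character-for-character against the unique subset of each possible complete value entry:

    from typing import Dict, Optional

    def auto_complete(spoken_subset: str, unique_subsets: Dict[str, str]) -> Optional[str]:
        """Return the possible complete value entry whose unique subset matches the spoken
        subset (same characters, in the same order and positions), or None if there is no match."""
        for possible_entry, unique_subset in unique_subsets.items():
            if spoken_subset == unique_subset:
                return possible_entry
        return None

    subsets = {
        "1234567890222222222": "123",
        "1245678903111111111": "124",
        "1345678900200000000": "134",
    }
    suggestion = auto_complete("124", subsets)
    if suggestion is None:
        print("Alert: the spoken subset may be incorrect")  # no matching entry in the suggestion list
    else:
        print("Auto-completed entry:", suggestion)          # 1245678903111111111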


Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries continues by confirming the auto-completed spoken complete value entry with the user (i.e., confirming that the auto-completed spoken complete value entry matches the spoken complete value entry that the user intended to speak when prompted in step 120) (step 160). The auto-completed spoken complete value entry is a suggestion for the spoken complete value entry. The processor of the mobile computing device confirms the auto-completed spoken complete value entry by speaking back the auto-completed spoken complete value entry. After the mobile computing device (more particularly, the processor thereof) speaks back the auto-completed spoken complete value entry, the mobile computing device may ask the user for user confirmation that the suggestion (the auto-completed spoken complete value entry as spoken back by the mobile computing device) is correct.


The user may accept or decline the suggestion in a number of ways (e.g., via the graphical user interface). If the suggestion is accepted, the auto-complete method 100 for a spoken complete value entry ends. More specifically, the mobile computing device may send a signal to the server 12 that the auto-completed spoken complete value entry has been confirmed. The server 12 (more particularly, the business logic application thereof) may then repeat method 100 for another spoken complete value entry.


If the suggestion is declined by the user, at least the comparing and automatically completing steps may be repeated until the user accepts the suggestion. According to various embodiments, if the suggestion is declined by the user (i.e., the user does not confirm the automatically completed spoken complete value entry as the spoken complete value entry), the method further comprises removing the possible complete value entry from the suggestion list so it will not be used again to (incorrectly) automatically complete the spoken value entry (step 170).
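The confirm/decline loop, including removal of a declined entry from the suggestion list (step 170), might be sketched as follows (the callable parameters, e.g., ask_confirmation, are placeholders for whatever dialog the mobile computing device actually uses, not the patent's API):

    from typing import Callable, Dict, List, Optional

    def confirm_with_retry(
        spoken_subset: str,
        suggestion_list: List[str],
        build_subsets: Callable[[List[str]], Dict[str, str]],
        auto_complete: Callable[[str, Dict[str, str]], Optional[str]],
        ask_confirmation: Callable[[str], bool],
    ) -> Optional[str]:
        """Auto-complete and confirm; declined suggestions are removed so they are not reused."""
        remaining = list(suggestion_list)
        while remaining:
            suggestion = auto_complete(spoken_subset, build_subsets(remaining))
            if suggestion is None:
                return None  # no remaining match: the user may be alerted that the spoken subset is incorrect
            if ask_confirmation(suggestion):  # e.g., speak back the suggestion and await acceptance
                return suggestion
            remaining.remove(suggestion)      # step 170: drop the declined possible complete value entry
        return None

Recomputing the unique subsets for the remaining entries on each pass accounts for the fact that removing an entry may change which prefixes remain unique.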


If the suggestion is neither accepted nor declined, the processor 90 may be further configured to generate and transmit to the user 76 an alert. The alert may comprise an audible sound, a visual indication, or the like. Additionally, or alternatively, the business logic application may stop until the suggestion is accepted or declined (e.g., the server may discontinue sending task messages and task instructions until the suggestion is accepted or declined).


Based on the foregoing, it is to be appreciated that various embodiments provide auto-complete methods for spoken complete value entries. Various embodiments speed up human-computer interactions. Various embodiments as described herein are especially useful for spoken long value entries and for use in workflow processes, improving workflow efficiencies and easing worker frustration.


A person having ordinary skill in the art will recognize that the environments illustrated in FIGS. 1 through 4 are not intended to limit the scope of various embodiments of the present invention. In particular, the server 12 and the mobile computing system 16 may include fewer or additional components, or alternative configurations, consistent with alternative embodiments of the present invention. Thus, a person having ordinary skill in the art will recognize that other alternative hardware and/or software environments may be used without departing from the scope of the present invention. For example, a person having ordinary skill in the art will appreciate that the server 12 and mobile computing system 16 may include more or fewer applications disposed therein. As such, other alternative hardware and software environments may be used without departing from the scope of embodiments of the present invention. Moreover, a person having ordinary skill in the art will appreciate that the terminology used to describe various pieces of data, task messages, task instructions, voice dialogs, speech output, speech input, and machine readable input is merely used for purposes of differentiation and is not intended to be limiting. The routines executed to implement the embodiments of the present invention, whether implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions executed by one or more computing systems, will be referred to herein as a “sequence of operations,” a “program product,” or, more simply, “program code.” The program code typically comprises one or more instructions that are resident at various times in various memory and storage devices in a computing system (e.g., the server 12 and/or mobile computing system 16), and that, when read and executed by one or more processors of the mobile computing system, cause that computing system to perform the steps necessary to execute steps, elements, and/or blocks embodying the various aspects of the present invention.


While the present invention has been, and hereinafter will be, described in the context of fully functioning computing systems, those skilled in the art will appreciate that the various embodiments of the present invention are capable of being distributed as a program product in a variety of forms, and that the present invention applies equally regardless of the particular type of computer readable media used to actually carry out the distribution. Examples of computer readable media include, but are not limited to, physical and tangible recordable-type media such as volatile and nonvolatile memory devices, floppy and other removable disks, hard disk drives, and optical disks (e.g., CD-ROMs, DVDs, Blu-Ray disks, etc.), among others. In addition, various program code described hereinafter may be identified based upon the application or software component within which it is implemented in a specific embodiment of the present invention. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the present invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. Furthermore, given the typically endless number of manners in which computer programs may be organized into routines, procedures, methods, modules, objects, and the like, as well as the various manners in which program functionality may be allocated among various software layers that are resident within a typical computer (e.g., operating systems, libraries, APIs, applications, applets, etc.), it should be appreciated that the present invention is not limited to the specific organization and allocation of program functionality described herein.


* * *

To supplement the present disclosure, this application incorporates entirely by reference the following commonly assigned patents, patent application publications, and patent applications:


U.S. Pat. Nos. 6,832,725; 7,128,266; 7,159,783; 7,413,127; 7,726,575; 8,294,969; 8,317,105; 8,322,622; 8,366,005; 8,371,507; 8,376,233; 8,381,979; 8,390,909; 8,408,464; 8,408,468; 8,408,469; 8,424,768; 8,448,863; 8,457,013; 8,459,557; 8,469,272; 8,474,712; 8,479,992; 8,490,877; 8,517,271; 8,523,076; 8,528,818; 8,544,737; 8,548,242; 8,548,420; 8,550,335; 8,550,354; 8,550,357; 8,556,174; 8,556,176; 8,556,177; 8,559,767; 8,599,957; 8,561,895; 8,561,903; 8,561,905; 8,565,107; 8,571,307; 8,579,200; 8,583,924; 8,584,945; 8,587,595; 8,587,697; 8,588,869; 8,590,789; 8,596,539; 8,596,542; 8,596,543; 8,599,271; 8,599,957; 8,600,158; 8,600,167; 8,602,309; 8,608,053; 8,608,071; 8,611,309; 8,615,487; 8,616,454; 8,621,123; 8,622,303; 8,628,013; 8,628,015; 8,628,016; 8,629,926; 8,630,491; 8,635,309; 8,636,200; 8,636,212; 8,636,215; 8,636,224; 8,638,806; 8,640,958; 8,640,960; 8,643,717; 8,646,692; 8,646,694; 8,657,200; 8,659,397; 8,668,149; 8,678,285; 8,678,286; 8,682,077; 8,687,282; 8,692,927; 8,695,880; 8,698,949; 8,717,494; 8,717,494; 8,720,783; 8,723,804; 8,723,904; 8,727,223; 8,702,237; 8,740,082; 8,740,085; 8,746,563; 8,750,445; 8,752,766; 8,756,059; 8,757,495; 8,760,563; 8,763,909; 8,777,108; 8,777,109; 8,779,898; 8,781,520; 8,783,573; 8,789,757; 8,789,758; 8,789,759; 8,794,520; 8,794,522; 8,794,525; 8,794,526; 8,798,367; 8,807,431; 8,807,432; 8,820,630; 8,822,848; 8,824,692; 8,824,696; 8,842,849; 8,844,822; 8,844,823; 8,849,019; 8,851,383; 8,854,633; 8,866,963; 8,868,421; 8,868,519; 8,868,802; 8,868,803; 8,870,074; 8,879,639; 8,880,426; 8,881,983; 8,881,987; 8,903,172; 8,908,995; 8,910,870; 8,910,875; 8,914,290; 8,914,788; 8,915,439; 8,915,444; 8,916,789; 8,918,250; 8,918,564; 8,925,818; 8,939,374; 8,942,480; 8,944,313; 8,944,327; 8,944,332; 8,950,678; 8,967,468; 8,971,346; 8,976,030; 8,976,368; 8,978,981; 8,978,983; 8,978,984; 8,985,456; 8,985,457; 8,985,459; 8,985,461; 8,988,578; 8,988,590; 8,991,704; 8,996,194; 8,996,384; 9,002,641; 9,007,368; 9,010,641; 9,015,513; 9,016,576; 9,022,288; 9,030,964; 9,033,240; 9,033,242; 9,036,054; 9,037,344; 9,038,911; 9,038,915; 9,047,098; 9,047,359; 9,047,420; 9,047,525; 9,047,531; 9,053,055; 9,053,378; 9,053,380; 9,058,526; 9,064,165; 9,064,167; 9,064,168; 9,064,254; 9,066,032; 9,070,032;


U.S. Design Patent No. D716,285;


U.S. Design Patent No. D723,560;


U.S. Design Patent No. D730,357;


U.S. Design Patent No. D730,901;


U.S. Design Patent No. D730,902;


U.S. Design Patent No. D733,112;


U.S. Design Patent No. D734,339;


International Publication No. 2013/163789;


International Publication No. 2013/173985;


International Publication No. 2014/019130;


International Publication No. 2014/110495;


U.S. Patent Application Publication No. 2008/0185432;


U.S. Patent Application Publication No. 2009/0134221;


U.S. Patent Application Publication No. 2010/0177080;


U.S. Patent Application Publication No. 2010/0177076;


U.S. Patent Application Publication No. 2010/0177707;


U.S. Patent Application Publication No. 2010/0177749;


U.S. Patent Application Publication No. 2010/0265880;


U.S. Patent Application Publication No. 2011/0202554;


U.S. Patent Application Publication No. 2012/0111946;


U.S. Patent Application Publication No. 2012/0168511;


U.S. Patent Application Publication No. 2012/0168512;


U.S. Patent Application Publication No. 2012/0193423;


U.S. Patent Application Publication No. 2012/0203647;


U.S. Patent Application Publication No. 2012/0223141;


U.S. Patent Application Publication No. 2012/0228382;


U.S. Patent Application Publication No. 2012/0248188;


U.S. Patent Application Publication No. 2013/0043312;


U.S. Patent Application Publication No. 2013/0082104;


U.S. Patent Application Publication No. 2013/0175341;


U.S. Patent Application Publication No. 2013/0175343;


U.S. Patent Application Publication No. 2013/0257744;


U.S. Patent Application Publication No. 2013/0257759;


U.S. Patent Application Publication No. 2013/0270346;


U.S. Patent Application Publication No. 2013/0287258;


U.S. Patent Application Publication No. 2013/0292475;


U.S. Patent Application Publication No. 2013/0292477;


U.S. Patent Application Publication No. 2013/0293539;


U.S. Patent Application Publication No. 2013/0293540;


U.S. Patent Application Publication No. 2013/0306728;


U.S. Patent Application Publication No. 2013/0306731;


U.S. Patent Application Publication No. 2013/0307964;


U.S. Patent Application Publication No. 2013/0308625;


U.S. Patent Application Publication No. 2013/0313324;


U.S. Patent Application Publication No. 2013/0313325;


U.S. Patent Application Publication No. 2013/0342717;


U.S. Patent Application Publication No. 2014/0001267;


U.S. Patent Application Publication No. 2014/0008439;


U.S. Patent Application Publication No. 2014/0025584;


U.S. Patent Application Publication No. 2014/0034734;


U.S. Patent Application Publication No. 2014/0036848;


U.S. Patent Application Publication No. 2014/0039693;


U.S. Patent Application Publication No. 2014/0042814;


U.S. Patent Application Publication No. 2014/0049120;


U.S. Patent Application Publication No. 2014/0049635;


U.S. Patent Application Publication No. 2014/0061306;


U.S. Patent Application Publication No. 2014/0063289;


U.S. Patent Application Publication No. 2014/0066136;


U.S. Patent Application Publication No. 2014/0067692;


U.S. Patent Application Publication No. 2014/0070005;


U.S. Patent Application Publication No. 2014/0071840;


U.S. Patent Application Publication No. 2014/0074746;


U.S. Patent Application Publication No. 2014/0076974;


U.S. Patent Application Publication No. 2014/0078341;


U.S. Patent Application Publication No. 2014/0078345;


U.S. Patent Application Publication No. 2014/0097249;


U.S. Patent Application Publication No. 2014/0098792;


U.S. Patent Application Publication No. 2014/0100813;


U.S. Patent Application Publication No. 2014/0103115;


U.S. Patent Application Publication No. 2014/0104413;


U.S. Patent Application Publication No. 2014/0104414;


U.S. Patent Application Publication No. 2014/0104416;


U.S. Patent Application Publication No. 2014/0104451;


U.S. Patent Application Publication No. 2014/0106594;


U.S. Patent Application Publication No. 2014/0106725;


U.S. Patent Application Publication No. 2014/0108010;


U.S. Patent Application Publication No. 2014/0108402;


U.S. Patent Application Publication No. 2014/0110485;


U.S. Patent Application Publication No. 2014/0114530;


U.S. Patent Application Publication No. 2014/0124577;


U.S. Patent Application Publication No. 2014/0124579;


U.S. Patent Application Publication No. 2014/0125842;


U.S. Patent Application Publication No. 2014/0125853;


U.S. Patent Application Publication No. 2014/0125999;


U.S. Patent Application Publication No. 2014/0129378;


U.S. Patent Application Publication No. 2014/0131438;


U.S. Patent Application Publication No. 2014/0131441;


U.S. Patent Application Publication No. 2014/0131443;


U.S. Patent Application Publication No. 2014/0131444;


U.S. Patent Application Publication No. 2014/0131445;


U.S. Patent Application Publication No. 2014/0131448;


U.S. Patent Application Publication No. 2014/0133379;


U.S. Patent Application Publication No. 2014/0136208;


U.S. Patent Application Publication No. 2014/0140585;


U.S. Patent Application Publication No. 2014/0151453;


U.S. Patent Application Publication No. 2014/0152882;


U.S. Patent Application Publication No. 2014/0158770;


U.S. Patent Application Publication No. 2014/0159869;


U.S. Patent Application Publication No. 2014/0166755;


U.S. Patent Application Publication No. 2014/0166759;


U.S. Patent Application Publication No. 2014/0168787;


U.S. Patent Application Publication No. 2014/0175165;


U.S. Patent Application Publication No. 2014/0175172;


U.S. Patent Application Publication No. 2014/0191644;


U.S. Patent Application Publication No. 2014/0191913;


U.S. Patent Application Publication No. 2014/0197238;


U.S. Patent Application Publication No. 2014/0197239;


U.S. Patent Application Publication No. 2014/0197304;


U.S. Patent Application Publication No. 2014/0214631;


U.S. Patent Application Publication No. 2014/0217166;


U.S. Patent Application Publication No. 2014/0217180;


U.S. Patent Application Publication No. 2014/0231500;


U.S. Patent Application Publication No. 2014/0232930;


U.S. Patent Application Publication No. 2014/0247315;


U.S. Patent Application Publication No. 2014/0263493;


U.S. Patent Application Publication No. 2014/0263645;


U.S. Patent Application Publication No. 2014/0267609;


U.S. Patent Application Publication No. 2014/0270196;


U.S. Patent Application Publication No. 2014/0270229;


U.S. Patent Application Publication No. 2014/0278387;


U.S. Patent Application Publication No. 2014/0278391;


U.S. Patent Application Publication No. 2014/0282210;


U.S. Patent Application Publication No. 2014/0284384;


U.S. Patent Application Publication No. 2014/0288933;


U.S. Patent Application Publication No. 2014/0297058;


U.S. Patent Application Publication No. 2014/0299665;


U.S. Patent Application Publication No. 2014/0312121;


U.S. Patent Application Publication No. 2014/0319220;


U.S. Patent Application Publication No. 2014/0319221;


U.S. Patent Application Publication No. 2014/0326787;


U.S. Patent Application Publication No. 2014/0332590;


U.S. Patent Application Publication No. 2014/0344943;


U.S. Patent Application Publication No. 2014/0346233;


U.S. Patent Application Publication No. 2014/0351317;


U.S. Patent Application Publication No. 2014/0353373;


U.S. Patent Application Publication No. 2014/0361073;


U.S. Patent Application Publication No. 2014/0361082;


U.S. Patent Application Publication No. 2014/0362184;


U.S. Patent Application Publication No. 2014/0363015;


U.S. Patent Application Publication No. 2014/0369511;


U.S. Patent Application Publication No. 2014/0374483;


U.S. Patent Application Publication No. 2014/0374485;


U.S. Patent Application Publication No. 2015/0001301;


U.S. Patent Application Publication No. 2015/0001304;


U.S. Patent Application Publication No. 2015/0003673;


U.S. Patent Application Publication No. 2015/0009338;


U.S. Patent Application Publication No. 2015/0009610;


U.S. Patent Application Publication No. 2015/0014416;


U.S. Patent Application Publication No. 2015/0021397;


U.S. Patent Application Publication No. 2015/0028102;


U.S. Patent Application Publication No. 2015/0028103;


U.S. Patent Application Publication No. 2015/0028104;


U.S. Patent Application Publication No. 2015/0029002;


U.S. Patent Application Publication No. 2015/0032709;


U.S. Patent Application Publication No. 2015/0039309;


U.S. Patent Application Publication No. 2015/0039878;


U.S. Patent Application Publication No. 2015/0040378;


U.S. Patent Application Publication No. 2015/0048168;


U.S. Patent Application Publication No. 2015/0049347;


U.S. Patent Application Publication No. 2015/0051992;


U.S. Patent Application Publication No. 2015/0053766;


U.S. Patent Application Publication No. 2015/0053768;


U.S. Patent Application Publication No. 2015/0053769;


U.S. Patent Application Publication No. 2015/0060544;


U.S. Patent Application Publication No. 2015/0062366;


U.S. Patent Application Publication No. 2015/0063215;


U.S. Patent Application Publication No. 2015/0063676;


U.S. Patent Application Publication No. 2015/0069130;


U.S. Patent Application Publication No. 2015/0071819;


U.S. Patent Application Publication No. 2015/0083800;


U.S. Patent Application Publication No. 2015/0086114;


U.S. Patent Application Publication No. 2015/0088522;


U.S. Patent Application Publication No. 2015/0096872;


U.S. Patent Application Publication No. 2015/0099557;


U.S. Patent Application Publication No. 2015/0100196;


U.S. Patent Application Publication No. 2015/0102109;


U.S. Patent Application Publication No. 2015/0115035;


U.S. Patent Application Publication No. 2015/0127791;


U.S. Patent Application Publication No. 2015/0128116;


U.S. Patent Application Publication No. 2015/0129659;


U.S. Patent Application Publication No. 2015/0133047;


U.S. Patent Application Publication No. 2015/0134470;


U.S. Patent Application Publication No. 2015/0136851;


U.S. Patent Application Publication No. 2015/0136854;


U.S. Patent Application Publication No. 2015/0142492;


U.S. Patent Application Publication No. 2015/0144692;


U.S. Patent Application Publication No. 2015/0144698;


U.S. Patent Application Publication No. 2015/0144701;


U.S. Patent Application Publication No. 2015/0149946;


U.S. Patent Application Publication No. 2015/0161429;


U.S. Patent Application Publication No. 2015/0169925;


U.S. Patent Application Publication No. 2015/0169929;


U.S. Patent Application Publication No. 2015/0178523;


U.S. Patent Application Publication No. 2015/0178534;


U.S. Patent Application Publication No. 2015/0178535;


U.S. Patent Application Publication No. 2015/0178536;


U.S. Patent Application Publication No. 2015/0178537;


U.S. Patent Application Publication No. 2015/0181093;


U.S. Patent Application Publication No. 2015/0181109;

  • U.S. patent application Ser. No. 13/367,978 for a Laser Scanning Module Employing an Elastomeric U-Hinge Based Laser Scanning Assembly, filed Feb. 7, 2012 (Feng et al.);
  • U.S. patent application Ser. No. 29/458,405 for an Electronic Device, filed Jun. 19, 2013 (Fitch et al.);
  • U.S. patent application Ser. No. 29/459,620 for an Electronic Device Enclosure, filed Jul. 2, 2013 (London et al.);
  • U.S. patent application Ser. No. 29/468,118 for an Electronic Device Case, filed Sep. 26, 2013 (Oberpriller et al.);
  • U.S. patent application Ser. No. 14/150,393 for Indicia-reader Having Unitary Construction Scanner, filed Jan. 8, 2014 (Colavito et al.);
  • U.S. patent application Ser. No. 14/200,405 for Indicia Reader for Size-Limited Applications filed Mar. 7, 2014 (Feng et al.);
  • U.S. patent application Ser. No. 14/231,898 for Hand-Mounted Indicia-Reading Device with Finger Motion Triggering filed Apr. 1, 2014 (Van Horn et al.);
  • U.S. patent application Ser. No. 29/486,759 for an Imaging Terminal, filed Apr. 2, 2014 (Oberpriller et al.);
  • U.S. patent application Ser. No. 14/257,364 for Docking System and Method Using Near Field Communication filed Apr. 21, 2014 (Showering);
  • U.S. patent application Ser. No. 14/264,173 for Autofocus Lens System for Indicia Readers filed Apr. 29, 2014 (Ackley et al.);
  • U.S. patent application Ser. No. 14/277,337 for MULTIPURPOSE OPTICAL READER, filed May 14, 2014 (Jovanovski et al.);
  • U.S. patent application Ser. No. 14/283,282 for TERMINAL HAVING ILLUMINATION AND FOCUS CONTROL filed May 21, 2014 (Liu et al.);
  • U.S. patent application Ser. No. 14/327,827 for a MOBILE-PHONE ADAPTER FOR ELECTRONIC TRANSACTIONS, filed Jul. 10, 2014 (Hejl);
  • U.S. patent application Ser. No. 14/334,934 for a SYSTEM AND METHOD FOR INDICIA VERIFICATION, filed Jul. 18, 2014 (Hejl);
  • U.S. patent application Ser. No. 14/339,708 for LASER SCANNING CODE SYMBOL READING SYSTEM, filed Jul. 24, 2014 (Xian et al.);
  • U.S. patent application Ser. No. 14/340,627 for an AXIALLY REINFORCED FLEXIBLE SCAN ELEMENT, filed Jul. 25, 2014 (Rueblinger et al.);
  • U.S. patent application Ser. No. 14/446,391 for MULTIFUNCTION POINT OF SALE APPARATUS WITH OPTICAL SIGNATURE CAPTURE filed Jul. 30, 2014 (Good et al.);
  • U.S. patent application Ser. No. 14/452,697 for INTERACTIVE INDICIA READER, filed Aug. 6, 2014 (Todeschini);
  • U.S. patent application Ser. No. 14/453,019 for DIMENSIONING SYSTEM WITH GUIDED ALIGNMENT, filed Aug. 6, 2014 (Li et al.);
  • U.S. patent application Ser. No. 14/462,801 for MOBILE COMPUTING DEVICE WITH DATA COGNITION SOFTWARE, filed on Aug. 19, 2014 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/483,056 for VARIABLE DEPTH OF FIELD BARCODE SCANNER filed Sep. 10, 2014 (McCloskey et al.);
  • U.S. patent application Ser. No. 14/513,808 for IDENTIFYING INVENTORY ITEMS IN A STORAGE FACILITY filed Oct. 14, 2014 (Singel et al.);
  • U.S. patent application Ser. No. 14/519,195 for HANDHELD DIMENSIONING SYSTEM WITH FEEDBACK filed Oct. 21, 2014 (Laffargue et al.);
  • U.S. patent application Ser. No. 14/519,179 for DIMENSIONING SYSTEM WITH MULTIPATH INTERFERENCE MITIGATION filed Oct. 21, 2014 (Thuries et al.);
  • U.S. patent application Ser. No. 14/519,211 for SYSTEM AND METHOD FOR DIMENSIONING filed Oct. 21, 2014 (Ackley et al.);
  • U.S. patent application Ser. No. 14/519,233 for HANDHELD DIMENSIONER WITH DATA-QUALITY INDICATION filed Oct. 21, 2014 (Laffargue et al.);
  • U.S. patent application Ser. No. 14/519,249 for HANDHELD DIMENSIONING SYSTEM WITH MEASUREMENT-CONFORMANCE FEEDBACK filed Oct. 21, 2014 (Ackley et al.);
  • U.S. patent application Ser. No. 14/527,191 for METHOD AND SYSTEM FOR RECOGNIZING SPEECH USING WILDCARDS IN AN EXPECTED RESPONSE filed Oct. 29, 2014 (Braho et al.);
  • U.S. patent application Ser. No. 14/529,563 for ADAPTABLE INTERFACE FOR A MOBILE COMPUTING DEVICE filed Oct. 31, 2014 (Schoon et al.);
  • U.S. patent application Ser. No. 14/529,857 for BARCODE READER WITH SECURITY FEATURES filed Oct. 31, 2014 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/398,542 for PORTABLE ELECTRONIC DEVICES HAVING A SEPARATE LOCATION TRIGGER UNIT FOR USE IN CONTROLLING AN APPLICATION UNIT filed Nov. 3, 2014 (Bian et al.);
  • U.S. patent application Ser. No. 14/531,154 for DIRECTING AN INSPECTOR THROUGH AN INSPECTION filed Nov. 3, 2014 (Miller et al.);
  • U.S. patent application Ser. No. 14/533,319 for BARCODE SCANNING SYSTEM USING WEARABLE DEVICE WITH EMBEDDED CAMERA filed Nov. 5, 2014 (Todeschini);
  • U.S. patent application Ser. No. 14/535,764 for CONCATENATED EXPECTED RESPONSES FOR SPEECH RECOGNITION filed Nov. 7, 2014 (Braho et al.);
  • U.S. patent application Ser. No. 14/568,305 for AUTO-CONTRAST VIEWFINDER FOR AN INDICIA READER filed Dec. 12, 2014 (Todeschini);
  • U.S. patent application Ser. No. 14/573,022 for DYNAMIC DIAGNOSTIC INDICATOR GENERATION filed Dec. 17, 2014 (Goldsmith);
  • U.S. patent application Ser. No. 14/578,627 for SAFETY SYSTEM AND METHOD filed Dec. 22, 2014 (Ackley et al.);
  • U.S. patent application Ser. No. 14/580,262 for MEDIA GATE FOR THERMAL TRANSFER PRINTERS filed Dec. 23, 2014 (Bowles);
  • U.S. patent application Ser. No. 14/590,024 for SHELVING AND PACKAGE LOCATING SYSTEMS FOR DELIVERY VEHICLES filed Jan. 6, 2015 (Payne);
  • U.S. patent application Ser. No. 14/596,757 for SYSTEM AND METHOD FOR DETECTING BARCODE PRINTING ERRORS filed Jan. 14, 2015 (Ackley);
  • U.S. patent application Ser. No. 14/416,147 for OPTICAL READING APPARATUS HAVING VARIABLE SETTINGS filed Jan. 21, 2015 (Chen et al.);
  • U.S. patent application Ser. No. 14/614,706 for DEVICE FOR SUPPORTING AN ELECTRONIC TOOL ON A USER′S HAND filed Feb. 5, 2015 (Oberpriller et al.);
  • U.S. patent application Ser. No. 14/614,796 for CARGO APPORTIONMENT TECHNIQUES filed Feb. 5, 2015 (Morton et al.);
  • U.S. patent application Ser. No. 29/516,892 for TABLE COMPUTER filed Feb. 6, 2015 (Bidwell et al.);
  • U.S. patent application Ser. No. 14/619,093 for METHODS FOR TRAINING A SPEECH RECOGNITION SYSTEM filed Feb. 11, 2015 (Pecorari);
  • U.S. patent application Ser. No. 14/628,708 for DEVICE, SYSTEM, AND METHOD FOR DETERMINING THE STATUS OF CHECKOUT LANES filed Feb. 23, 2015 (Todeschini);
  • U.S. patent application Ser. No. 14/630,841 for TERMINAL INCLUDING IMAGING ASSEMBLY filed Feb. 25, 2015 (Gomez et al.);
  • U.S. patent application Ser. No. 14/635,346 for SYSTEM AND METHOD FOR RELIABLE STORE-AND-FORWARD DATA HANDLING BY ENCODED INFORMATION READING TERMINALS filed Mar. 2, 2015 (Sevier);
  • U.S. patent application Ser. No. 29/519,017 for SCANNER filed Mar. 2, 2015 (Zhou et al.);
  • U.S. patent application Ser. No. 14/405,278 for DESIGN PATTERN FOR SECURE STORE filed Mar. 9, 2015 (Zhu et al.);
  • U.S. patent application Ser. No. 14/660,970 for DECODABLE INDICIA READING TERMINAL WITH COMBINED ILLUMINATION filed Mar. 18, 2015 (Kearney et al.);
  • U.S. patent application Ser. No. 14/661,013 for REPROGRAMMING SYSTEM AND METHOD FOR DEVICES INCLUDING PROGRAMMING SYMBOL filed Mar. 18, 2015 (Soule et al.);
  • U.S. patent application Ser. No. 14/662,922 for MULTIFUNCTION POINT OF SALE SYSTEM filed Mar. 19, 2015 (Van Horn et al.);
  • U.S. patent application Ser. No. 14/663,638 for VEHICLE MOUNT COMPUTER WITH CONFIGURABLE IGNITION SWITCH BEHAVIOR filed Mar. 20, 2015 (Davis et al.);
  • U.S. patent application Ser. No. 14/664,063 for METHOD AND APPLICATION FOR SCANNING A BARCODE WITH A SMART DEVICE WHILE CONTINUOUSLY RUNNING AND DISPLAYING AN APPLICATION ON THE SMART DEVICE DISPLAY filed Mar. 20, 2015 (Todeschini);
  • U.S. patent application Ser. No. 14/669,280 for TRANSFORMING COMPONENTS OF A WEB PAGE TO VOICE PROMPTS filed Mar. 26, 2015 (Funyak et al.);
  • U.S. patent application Ser. No. 14/674,329 for AIMER FOR BARCODE SCANNING filed Mar. 31, 2015 (Bidwell);
  • U.S. patent application Ser. No. 14/676,109 for INDICIA READER filed Apr. 1, 2015 (Huck);
  • U.S. patent application Ser. No. 14/676,327 for DEVICE MANAGEMENT PROXY FOR SECURE DEVICES filed Apr. 1, 2015 (Yeakley et al.);
  • U.S. patent application Ser. No. 14/676,898 for NAVIGATION SYSTEM CONFIGURED TO INTEGRATE MOTION SENSING DEVICE INPUTS filed Apr. 2, 2015 (Showering);
  • U.S. patent application Ser. No. 14/679,275 for DIMENSIONING SYSTEM CALIBRATION SYSTEMS AND METHODS filed Apr. 6, 2015 (Laffargue et al.);
  • U.S. patent application Ser. No. 29/523,098 for HANDLE FOR A TABLET COMPUTER filed Apr. 7, 2015 (Bidwell et al.);
  • U.S. patent application Ser. No. 14/682,615 for SYSTEM AND METHOD FOR POWER MANAGEMENT OF MOBILE DEVICES filed Apr. 9, 2015 (Murawski et al.);
  • U.S. patent application Ser. No. 14/686,822 for MULTIPLE PLATFORM SUPPORT SYSTEM AND METHOD filed Apr. 15, 2015 (Qu et al.);
  • U.S. patent application Ser. No. 14/687,289 for SYSTEM FOR COMMUNICATION VIA A PERIPHERAL HUB filed Apr. 15, 2015 (Kohtz et al.);
  • U.S. patent application Ser. No. 29/524,186 for SCANNER filed Apr. 17, 2015 (Zhou et al.);
  • U.S. patent application Ser. No. 14/695,364 for MEDICATION MANAGEMENT SYSTEM filed Apr. 24, 2015 (Sewell et al.);
  • U.S. patent application Ser. No. 14/695,923 for SECURE UNATTENDED NETWORK AUTHENTICATION filed Apr. 24, 2015 (Kubler et al.);
  • U.S. patent application Ser. No. 29/525,068 for TABLET COMPUTER WITH REMOVABLE SCANNING DEVICE filed Apr. 27, 2015 (Schulte et al.);
  • U.S. patent application Ser. No. 14/699,436 for SYMBOL READING SYSTEM HAVING PREDICTIVE DIAGNOSTICS filed Apr. 29, 2015 (Nahill et al.);
  • U.S. patent application Ser. No. 14/702,110 for SYSTEM AND METHOD FOR REGULATING BARCODE DATA INJECTION INTO A RUNNING APPLICATION ON A SMART DEVICE filed May 1, 2015 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/702,979 for TRACKING BATTERY CONDITIONS filed May 4, 2015 (Young et al.);
  • U.S. patent application Ser. No. 14/704,050 for INTERMEDIATE LINEAR POSITIONING filed May 5, 2015 (Charpentier et al.);
  • U.S. patent application Ser. No. 14/705,012 for HANDS-FREE HUMAN MACHINE INTERFACE RESPONSIVE TO A DRIVER OF A VEHICLE filed May 6, 2015 (Fitch et al.);
  • U.S. patent application Ser. No. 14/705,407 for METHOD AND SYSTEM TO PROTECT SOFTWARE-BASED NETWORK-CONNECTED DEVICES FROM ADVANCED PERSISTENT THREAT filed May 6, 2015 (Hussey et al.);
  • U.S. patent application Ser. No. 14/707,037 for SYSTEM AND METHOD FOR DISPLAY OF INFORMATION USING A VEHICLE-MOUNT COMPUTER filed May 8, 2015 (Chamberlin);
  • U.S. patent application Ser. No. 14/707,123 for APPLICATION INDEPENDENT DEX/UCS INTERFACE filed May 8, 2015 (Pape);
  • U.S. patent application Ser. No. 14/707,492 for METHOD AND APPARATUS FOR READING OPTICAL INDICIA USING A PLURALITY OF DATA SOURCES filed May 8, 2015 (Smith et al.);
  • U.S. patent application Ser. No. 14/710,666 for PRE-PAID USAGE SYSTEM FOR ENCODED INFORMATION READING TERMINALS filed May 13, 2015 (Smith);
  • U.S. patent application Ser. No. 29/526,918 for CHARGING BASE filed May 14, 2015 (Fitch et al.);
  • U.S. patent application Ser. No. 14/715,672 for AUGUMENTED REALITY ENABLED HAZARD DISPLAY filed May 19, 2015 (Venkatesha et al.);
  • U.S. patent application Ser. No. 14/715,916 for EVALUATING IMAGE VALUES filed May 19, 2015 (Ackley);
  • U.S. patent application Ser. No. 14/722,608 for INTERACTIVE USER INTERFACE FOR CAPTURING A DOCUMENT IN AN IMAGE SIGNAL filed May 27, 2015 (Showering et al.);
  • U.S. patent application Ser. No. 29/528,165 for IN-COUNTER BARCODE SCANNER filed May 27, 2015 (Oberpriller et al.);
  • U.S. patent application Ser. No. 14/724,134 for ELECTRONIC DEVICE WITH WIRELESS PATH SELECTION CAPABILITY filed May 28, 2015 (Wang et al.);
  • U.S. patent application Ser. No. 14/724,849 for METHOD OF PROGRAMMING THE DEFAULT CABLE INTERFACE SOFTWARE IN AN INDICIA READING DEVICE filed May 29, 2015 (Barten);
  • U.S. patent application Ser. No. 14/724,908 for IMAGING APPARATUS HAVING IMAGING ASSEMBLY filed May 29, 2015 (Barber et al.);
  • U.S. patent application Ser. No. 14/725,352 for APPARATUS AND METHODS FOR MONITORING ONE OR MORE PORTABLE DATA TERMINALS (Caballero et al.);
  • U.S. patent application Ser. No. 29/528,590 for ELECTRONIC DEVICE filed May 29, 2015 (Fitch et al.);
  • U.S. patent application Ser. No. 29/528,890 for MOBILE COMPUTER HOUSING filed Jun. 2, 2015 (Fitch et al.);
  • U.S. patent application Ser. No. 14/728,397 for DEVICE MANAGEMENT USING VIRTUAL INTERFACES CROSS-REFERENCE TO RELATED APPLICATIONS filed Jun. 2, 2015 (Caballero);
  • U.S. patent application Ser. No. 14/732,870 for DATA COLLECTION MODULE AND SYSTEM filed Jun. 8, 2015 (Powilleit);
  • U.S. patent application Ser. No. 29/529,441 for INDICIA READING DEVICE filed Jun. 8, 2015 (Zhou et al.);
  • U.S. patent application Ser. No. 14/735,717 for INDICIA-READING SYSTEMS HAVING AN INTERFACE WITH A USER'S NERVOUS SYSTEM filed Jun. 10, 2015 (Todeschini);
  • U.S. patent application Ser. No. 14/738,038 for METHOD OF AND SYSTEM FOR DETECTING OBJECT WEIGHING INTERFERENCES filed Jun. 12, 2015 (Amundsen et al.);
  • U.S. patent application Ser. No. 14/740,320 for TACTILE SWITCH FOR A MOBILE ELECTRONIC DEVICE filed Jun. 16, 2015 (Bandringa);
  • U.S. patent application Ser. No. 14/740,373 for CALIBRATING A VOLUME DIMENSIONER filed Jun. 16, 2015 (Ackley et al.);
  • U.S. patent application Ser. No. 14/742,818 for INDICIA READING SYSTEM EMPLOYING DIGITAL GAIN CONTROL filed Jun. 18, 2015 (Xian et al.);
  • U.S. patent application Ser. No. 14/743,257 for WIRELESS MESH POINT PORTABLE DATA TERMINAL filed Jun. 18, 2015 (Wang et al.);
  • U.S. patent application Ser. No. 29/530,600 for CYCLONE filed Jun. 18, 2015 (Vargo et al.);
  • U.S. patent application Ser. No. 14/744,633 for IMAGING APPARATUS COMPRISING IMAGE SENSOR ARRAY HAVING SHARED GLOBAL SHUTTER CIRCUITRY filed Jun. 19, 2015 (Wang);
  • U.S. patent application Ser. No. 14/744,836 for CLOUD-BASED SYSTEM FOR READING OF DECODABLE INDICIA filed Jun. 19, 2015 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/745,006 for SELECTIVE OUTPUT OF DECODED MESSAGE DATA filed Jun. 19, 2015 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/747,197 for OPTICAL PATTERN PROJECTOR filed Jun. 23, 2015 (Thuries et al.);
  • U.S. patent application Ser. No. 14/747,490 for DUAL-PROJECTOR THREE-DIMENSIONAL SCANNER filed Jun. 23, 2015 (Jovanovski et al.); and
  • U.S. patent application Ser. No. 14/748,446 for CORDLESS INDICIA READER WITH A MULTIFUNCTION COIL FOR WIRELESS CHARGING AND EAS DEACTIVATION, filed Jun. 24, 2015 (Xie et al.).


* * *

In the specification and/or figures, typical embodiments of the invention have been disclosed. The present invention is not limited to such exemplary embodiments. The use of the term “and/or” includes any and all combinations of one or more of the associated listed items. The figures are schematic representations and so are not necessarily drawn to scale. Unless otherwise noted, specific terms have been used in a generic and descriptive sense and not for purposes of limitation.

Claims
  • 1. An auto-complete method comprising: displaying a request for information adjacent to a graphical input field; in response to displaying the request for information, receiving from a user, via a microphone, a spoken response comprising a set of spoken language characters, wherein the spoken response is associated with the request for information; translating a subset of the set of spoken language characters into a set of written characters to create a set of translated written characters associated with the spoken response; displaying the set of translated written characters in the graphical input field; comparing the set of translated written characters to a collection of sets of written characters to determine whether the set of translated written characters matches a portion of the collection of sets of written characters; and in response to matching the set of translated written characters to a first portion of a first set of written characters of the collection of sets of written characters, displaying a second portion that is a remainder of the first set of written characters as appended to the set of translated written characters displayed in the graphical input field.
  • 2. The auto-complete method of claim 1, wherein the second portion is displayed in a first format, and the set of translated written characters is displayed in a second format different from the first format.
  • 3. The auto-complete method of claim 2, wherein the first format is associated with a first text color, and the second format is associated with a second text color.
  • 4. The auto-complete method according to claim 1, further comprising receiving a list of the collection of sets of written characters wherein each set of written characters in the collection of sets of written characters includes a unique subset of characters.
  • 5. The auto-complete method according to claim 4, further comprising receiving the collection of sets of written characters based on collecting sets of written characters from at least one of common responses and responses expected to be received from the user in a particular context.
  • 6. The auto-complete method according to claim 1, further comprising transmitting an alert if the set of translated written characters does not match the portion of the collection of sets of written characters.
  • 7. The auto-complete method according to claim 1, further comprising prompting the user to confirm an automatically completed subset of spoken language characters as the information, after the second portion that is the remainder of the first set of written characters as appended to the set of translated written characters is displayed in the graphical input field.
  • 8. The auto-complete method according to claim 7, wherein prompting the user to confirm the automatically completed subset of spoken language characters comprises reading back the automatically completed subset of spoken language characters by the user.
  • 9. The auto-complete method according to claim 1, further comprising: displaying, in the graphical input field, the second portion that is the remainder of the first set of written characters as a suggestion; and receiving at least one of a user acceptance or user declination of the suggestion.
  • 10. The auto-complete method according to claim 9, further comprising: upon receiving the user declination repeating, in order, at least comparing the set of translated written characters to the collection of sets of written characters to determine whether the set of translated written characters matches the portion of the collection of sets of written characters; in response to matching the set of translated written characters to another portion of another set of written characters of the collection of sets of written characters, displaying another portion that is a remainder of the another set of written characters as appended to the set of translated written characters displayed in the graphical input field; and removing the first set of written characters from a suggestion list.
  • 11. An auto-complete method comprising: receiving, via a microphone, a spoken response, comprising a set of spoken language characters, wherein the spoken response is associated with a request for information displayed in a graphical input field that is visible to a user; translating a subset of the set of spoken language characters into a set of written characters to create a set of translated written characters associated with the spoken response, wherein the set of translated written characters are adapted to be displayed in the graphical input field to the user; determining the set of translated written characters to match with a portion of, at least, a set from amongst a collection of sets of written characters; and in response to the matching of the set of translated written characters to a first portion of a first set of written characters of the collection of sets of written characters, automatically completing the subset of the set of spoken language characters to match the information, by appending to the set of translated written characters, a second portion that is a remainder of the first set of written characters.
  • 12. The auto-complete method according to claim 11, further comprising displaying the second portion that is the remainder of the first set of written characters as appended to the set of translated written characters in the graphical input field.
  • 13. The auto-complete method of claim 11, further comprising receiving the collection of sets of written characters based on collecting sets of written characters from at least one of common responses and responses expected to be received from the user in a particular context and wherein each set of written characters in the collection of sets of written characters includes a unique subset of characters.
  • 14. The auto-complete method of claim 11, wherein the second portion is displayed in a first format, and the set of translated written characters is displayed in a second format different from the first format and wherein the first format is associated with a first text color, and the second format is associated with a second text color.
  • 15. The auto-complete method of claim 11, further comprising transmitting an alert if the set of translated written characters does not match the first portion of the first set of written characters of the collection of sets of written characters.
  • 16. The auto-complete method according to claim 11, further comprising prompting the user to confirm the automatically completed the subset of the set of spoken language characters as the information based on one of: reading back the automatically completed the subset of the set of spoken language characters by the user; or an input provided by the user on an input interface associated with the graphical input field.
  • 17. A system, comprising: one or more processors configured to: cause a display screen to display a task instruction, wherein the task instruction is configured to prompt a request to a user to speak an information; receive a spoken response from a microphone, wherein the spoken response comprises a set of spoken language characters; translate a subset of the set of spoken language characters to a set of written characters to create a set of translated written characters associated with the spoken response; display the set of translated written characters in a graphical input field of the display screen; compare the set of translated written characters to a collection of sets of written characters to determine whether the set of translated written characters matches a portion of the collection of sets of written characters; determine, based on the comparison of the set of translated written characters to the collection of sets of written characters, that the translated subset of the set of spoken language characters matches with a subset of characters from a set of characters, wherein the subset of characters represents a possible complete value entry from amongst a set of possible complete value entries; and in response to matching the set of translated written characters to a first portion of a first set of written characters of the collection of sets of written characters, cause to display a second portion that is a remainder of the first set of written characters as appended to the set of translated written characters displayed in the graphical input field of the display screen.
  • 18. The system of claim 17, wherein the one or more processors are configured to: cause the second portion to be displayed in a first format and the set of translated written characters is displayed in a second format different from the first format and wherein the first format is associated with a first text color, and the second format is associated with a second text color.
  • 19. The system of claim 17, wherein the one or more processors are configured to: generate and transmit an alert if the set of translated written characters does not match the first portion of the first set of written characters of the collection of sets of written characters.
  • 20. The system of claim 17, wherein the one or more processors are configured to: cause to display the second portion, on the graphical input field, that is the remainder of the first set of written characters as a suggestion; receive one of a user acceptance or user refusal of the suggestion; upon receiving the user acceptance, continue to cause the display of the second portion that is the remainder of the first set of written characters as appended to the set of translated written characters displayed in the graphical input field of the display screen; and upon receiving the user refusal of the suggestion, cause to repeat, in order, at least compare the set of translated written characters to the collection of sets of written characters to determine whether the set of translated written characters matches the portion of the collection of sets of written characters; in response to matching the set of translated written characters to another portion of another set of written characters of the collection of sets of written characters, cause to display another portion that is the remainder of the first set of written characters as appended to the set of translated written characters displayed in the graphical input field of the display screen and remove the first set of written characters from a suggestion list.
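
The claims above recite matching a prefix of translated spoken characters against a collection of possible complete values and, on a match, appending the remainder as a completion. The following is a minimal illustrative sketch in Python; it is not part of the granted claims or any cited application, and the function name, the minimum-character threshold, and the sample serial numbers are assumptions introduced here only to show the matching-and-append behavior.

    # Minimal, hypothetical sketch of the auto-complete behavior recited in
    # claim 1: match the translated spoken prefix against possible complete
    # values and, on a unique match, append the remainder as a suggestion.
    # All identifiers and sample values are illustrative assumptions only.

    MIN_CHARS = 4  # assumed "predetermined minimum number of characters"

    def autocomplete(translated_prefix, possible_values, min_chars=MIN_CHARS):
        """Return the complete value uniquely identified by the spoken prefix,
        or None when no unique match exists yet."""
        if len(translated_prefix) < min_chars:
            return None  # too few spoken characters to attempt completion
        matches = [v for v in possible_values if v.startswith(translated_prefix)]
        if len(matches) == 1:
            return matches[0]  # unique match: remainder can be appended
        return None  # zero or several matches: keep listening or raise an alert

    if __name__ == "__main__":
        # Hypothetical collection of possible complete value entries.
        serial_numbers = ["1HGCM82633A004352", "1HGCM82633A778899", "5YJ3E1EA7KF317000"]
        spoken_so_far = "1HGCM82633A0"  # written characters translated from speech
        full_value = autocomplete(spoken_so_far, serial_numbers)
        if full_value is not None:
            remainder = full_value[len(spoken_so_far):]
            # The remainder would be displayed appended to the translated
            # characters, e.g. in a different text color, as a suggestion
            # that the user may accept or decline.
            print(spoken_so_far + "[" + remainder + "]")

In a fuller implementation following the claims, the no-match case could trigger the alert of claims 6, 15, and 19, and an accepted or refused suggestion would feed the suggestion-list handling of claims 9, 10, and 20.
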
CROSS-REFERENCE TO PRIORITY APPLICATION

This application is a continuation of U.S. non-provisional application Ser. No. 15/233,992 for Auto-Complete Methods For Spoken Complete Value Entries filed Aug. 11, 2016, which claims priority to and the benefit of U.S. provisional application Ser. No. 62/206,884 for Auto-Complete for Spoken Long Values Entry in a Speech Recognition System filed Aug. 19, 2015, each of which is hereby incorporated by reference in its entirety.

US Referenced Citations (479)
Number Name Date Kind
4156868 Levinson May 1979 A
4566065 Toth Jan 1986 A
4947438 Paeseler Aug 1990 A
5956678 Hab-Umbach Sep 1999 A
6832725 Gardiner et al. Dec 2004 B2
7128266 Zhu et al. Oct 2006 B2
7159783 Walczyk et al. Jan 2007 B2
7185271 Lee Feb 2007 B2
7413127 Ehrhart et al. Aug 2008 B2
7460995 Rinscheid Dec 2008 B2
7505906 Lewis Mar 2009 B2
7657423 Harik et al. Feb 2010 B1
7726575 Wang et al. Jun 2010 B2
7904298 Rao Mar 2011 B2
8019602 Yu Sep 2011 B2
8294969 Plesko Oct 2012 B2
8317105 Kotlarsky et al. Nov 2012 B2
8322622 Liu Dec 2012 B2
8366005 Kotlarsky et al. Feb 2013 B2
8371507 Haggerty et al. Feb 2013 B2
8376233 Horn et al. Feb 2013 B2
8381979 Franz Feb 2013 B2
8390909 Plesko Mar 2013 B2
8408464 Zhu et al. Apr 2013 B2
8408468 Van et al. Apr 2013 B2
8408469 Good Apr 2013 B2
8423363 Gupta Apr 2013 B2
8424768 Rueblinger et al. Apr 2013 B2
8448863 Xian et al. May 2013 B2
8457013 Essinger et al. Jun 2013 B2
8459557 Havens et al. Jun 2013 B2
8469272 Kearney Jun 2013 B2
8474712 Kearney et al. Jul 2013 B2
8479992 Kotlarsky et al. Jul 2013 B2
8490877 Kearney Jul 2013 B2
8517271 Kotlarsky et al. Aug 2013 B2
8521526 Lloyd Aug 2013 B1
8523076 Good Sep 2013 B2
8528818 Ehrhart et al. Sep 2013 B2
8544737 Gomez et al. Oct 2013 B2
8548420 Grunow et al. Oct 2013 B2
8550335 Samek et al. Oct 2013 B2
8550354 Gannon et al. Oct 2013 B2
8550357 Kearney Oct 2013 B2
8556174 Kosecki et al. Oct 2013 B2
8556176 Van et al. Oct 2013 B2
8556177 Hussey et al. Oct 2013 B2
8559767 Barber et al. Oct 2013 B2
8561895 Gomez et al. Oct 2013 B2
8561903 Sauerwein, Jr. Oct 2013 B2
8561905 Edmonds et al. Oct 2013 B2
8565107 Pease et al. Oct 2013 B2
8571307 Li et al. Oct 2013 B2
8579200 Samek et al. Nov 2013 B2
8583924 Caballero et al. Nov 2013 B2
8584945 Wang et al. Nov 2013 B2
8587595 Wang Nov 2013 B2
8587697 Hussey et al. Nov 2013 B2
8588869 Sauerwein et al. Nov 2013 B2
8590789 Nahill et al. Nov 2013 B2
8596539 Havens et al. Dec 2013 B2
8596542 Havens et al. Dec 2013 B2
8596543 Havens et al. Dec 2013 B2
8599271 Havens et al. Dec 2013 B2
8599957 Peake et al. Dec 2013 B2
8600158 Li et al. Dec 2013 B2
8600167 Showering Dec 2013 B2
8602309 Longacre et al. Dec 2013 B2
8606585 Melamed Dec 2013 B2
8608053 Meier et al. Dec 2013 B2
8608071 Liu et al. Dec 2013 B2
8611309 Wang et al. Dec 2013 B2
8615487 Gomez et al. Dec 2013 B2
8621123 Caballero Dec 2013 B2
8622303 Meier et al. Jan 2014 B2
8628013 Ding Jan 2014 B2
8628015 Wang et al. Jan 2014 B2
8628016 Winegar Jan 2014 B2
8629926 Wang Jan 2014 B2
8630491 Longacre et al. Jan 2014 B2
8635309 Berthiaume et al. Jan 2014 B2
8636200 Kearney Jan 2014 B2
8636212 Nahill et al. Jan 2014 B2
8636215 Ding et al. Jan 2014 B2
8636224 Wang Jan 2014 B2
8638806 Wang et al. Jan 2014 B2
8640958 Lu et al. Feb 2014 B2
8640960 Wang et al. Feb 2014 B2
8643717 Li et al. Feb 2014 B2
8645825 Cornea et al. Feb 2014 B1
8646692 Meier et al. Feb 2014 B2
8646694 Wang et al. Feb 2014 B2
8657200 Ren et al. Feb 2014 B2
8659397 Vargo et al. Feb 2014 B2
8668149 Good Mar 2014 B2
8678285 Kearney Mar 2014 B2
8678286 Smith et al. Mar 2014 B2
8682077 Longacre, Jr. Mar 2014 B1
D702237 Oberpriller et al. Apr 2014 S
8687282 Feng et al. Apr 2014 B2
8692927 Pease et al. Apr 2014 B2
8695880 Bremer et al. Apr 2014 B2
8698949 Grunow et al. Apr 2014 B2
8702000 Barber et al. Apr 2014 B2
8717494 Gannon May 2014 B2
8720783 Biss et al. May 2014 B2
8723804 Fletcher et al. May 2014 B2
8723904 Marty et al. May 2014 B2
8727223 Wang May 2014 B2
8740082 Wilz, Sr. Jun 2014 B2
8740085 Furlong et al. Jun 2014 B2
8746563 Hennick et al. Jun 2014 B2
8750445 Peake et al. Jun 2014 B2
8752766 Xian et al. Jun 2014 B2
8756059 Braho et al. Jun 2014 B2
8757495 Qu et al. Jun 2014 B2
8760563 Koziol et al. Jun 2014 B2
8763909 Reed et al. Jul 2014 B2
8777108 Coyle Jul 2014 B2
8777109 Oberpriller et al. Jul 2014 B2
8779898 Havens et al. Jul 2014 B2
8781520 Payne et al. Jul 2014 B2
8783573 Havens et al. Jul 2014 B2
8789757 Barten Jul 2014 B2
8789758 Hawley et al. Jul 2014 B2
8789759 Xian et al. Jul 2014 B2
8794520 Wang et al. Aug 2014 B2
8794522 Ehrhart Aug 2014 B2
8794525 Amundsen et al. Aug 2014 B2
8794526 Wang et al. Aug 2014 B2
8798367 Ellis Aug 2014 B2
8807431 Wang et al. Aug 2014 B2
8807432 Van et al. Aug 2014 B2
8820630 Qu et al. Sep 2014 B2
8822848 Meagher Sep 2014 B2
8824692 Sheerin et al. Sep 2014 B2
8824696 Braho Sep 2014 B2
8842849 Wahl et al. Sep 2014 B2
8844822 Kotlarsky et al. Sep 2014 B2
8844823 Fritz et al. Sep 2014 B2
8849019 Li et al. Sep 2014 B2
D716285 Chaney et al. Oct 2014 S
8851383 Yeakley et al. Oct 2014 B2
8854633 Laffargue et al. Oct 2014 B2
8866963 Grunow et al. Oct 2014 B2
8868421 Braho et al. Oct 2014 B2
8868519 Maloy et al. Oct 2014 B2
8868802 Barten Oct 2014 B2
8868803 Caballero Oct 2014 B2
8870074 Gannon Oct 2014 B1
8879639 Sauerwein, Jr. Nov 2014 B2
8880426 Smith Nov 2014 B2
8881983 Havens et al. Nov 2014 B2
8881987 Wang Nov 2014 B2
8903172 Smith Dec 2014 B2
8908995 Benos et al. Dec 2014 B2
8910870 Li et al. Dec 2014 B2
8910875 Ren et al. Dec 2014 B2
8914290 Hendrickson et al. Dec 2014 B2
8914788 Pettinelli et al. Dec 2014 B2
8915439 Feng et al. Dec 2014 B2
8915444 Havens et al. Dec 2014 B2
8916789 Woodburn Dec 2014 B2
8918250 Hollifield Dec 2014 B2
8918564 Caballero Dec 2014 B2
8925818 Kosecki et al. Jan 2015 B2
8939374 Jovanovski et al. Jan 2015 B2
8942480 Ellis Jan 2015 B2
8944313 Williams et al. Feb 2015 B2
8944327 Meier et al. Feb 2015 B2
8944332 Harding et al. Feb 2015 B2
8950678 Germaine et al. Feb 2015 B2
D723560 Zhou et al. Mar 2015 S
8967468 Gomez et al. Mar 2015 B2
8971346 Sevier Mar 2015 B2
8976030 Cunningham et al. Mar 2015 B2
8976368 El et al. Mar 2015 B2
8978981 Guan Mar 2015 B2
8978983 Bremer et al. Mar 2015 B2
8978984 Hennick et al. Mar 2015 B2
8985456 Zhu et al. Mar 2015 B2
8985457 Soule et al. Mar 2015 B2
8985459 Kearney et al. Mar 2015 B2
8985461 Gelay et al. Mar 2015 B2
8988578 Showering Mar 2015 B2
8988590 Gillet et al. Mar 2015 B2
8991704 Hopper et al. Mar 2015 B2
8996194 Davis et al. Mar 2015 B2
8996384 Funyak et al. Mar 2015 B2
8998091 Edmonds et al. Apr 2015 B2
9002641 Showering Apr 2015 B2
9007368 Laffargue et al. Apr 2015 B2
9010641 Qu et al. Apr 2015 B2
9015513 Murawski et al. Apr 2015 B2
9016576 Brady et al. Apr 2015 B2
D730357 Fitch et al. May 2015 S
9022288 Nahill et al. May 2015 B2
9030964 Essinger et al. May 2015 B2
9033240 Smith et al. May 2015 B2
9033242 Gillet et al. May 2015 B2
9036054 Koziol et al. May 2015 B2
9037344 Chamberlin May 2015 B2
9038911 Xian et al. May 2015 B2
9038915 Smith May 2015 B2
D730901 Oberpriller et al. Jun 2015 S
D730902 Fitch et al. Jun 2015 S
D733112 Chaney et al. Jun 2015 S
9047098 Barten Jun 2015 B2
9047359 Caballero et al. Jun 2015 B2
9047420 Caballero Jun 2015 B2
9047525 Barber et al. Jun 2015 B2
9047531 Showering et al. Jun 2015 B2
9049640 Wang et al. Jun 2015 B2
9053055 Caballero Jun 2015 B2
9053378 Hou et al. Jun 2015 B1
9053380 Xian et al. Jun 2015 B2
9057641 Amundsen et al. Jun 2015 B2
9058526 Powilleit Jun 2015 B2
9064165 Havens et al. Jun 2015 B2
9064167 Xian et al. Jun 2015 B2
9064168 Todeschini et al. Jun 2015 B2
9064254 Todeschini et al. Jun 2015 B2
9066032 Wang Jun 2015 B2
9070032 Corcoran Jun 2015 B2
D734339 Zhou et al. Jul 2015 S
D734751 Oberpriller et al. Jul 2015 S
9080856 Laffargue Jul 2015 B2
9082023 Feng et al. Jul 2015 B2
9084032 Rautiola Jul 2015 B2
9224022 Ackley et al. Dec 2015 B2
9224027 Van et al. Dec 2015 B2
D747321 London et al. Jan 2016 S
9230140 Ackley Jan 2016 B1
9250712 Todeschini Feb 2016 B1
9258033 Showering Feb 2016 B2
9262403 Christ Feb 2016 B2
9262633 Todeschini et al. Feb 2016 B1
9310609 Rueblinger et al. Apr 2016 B2
9317605 Zivkovic Apr 2016 B1
D757009 Oberpriller et al. May 2016 S
9342724 McCloskey et al. May 2016 B2
9375945 Bowles Jun 2016 B1
D760719 Zhou et al. Jul 2016 S
9390596 Todeschini Jul 2016 B1
D762604 Fitch et al. Aug 2016 S
D762647 Fitch et al. Aug 2016 S
9412242 Van et al. Aug 2016 B2
D766244 Zhou et al. Sep 2016 S
9443123 Hejl Sep 2016 B2
9443222 Singel et al. Sep 2016 B2
9478113 Xie et al. Oct 2016 B2
9640181 Parkinson May 2017 B2
9697281 Palmon Jul 2017 B1
20040039988 Lee Feb 2004 A1
20040243406 Rinscheid Dec 2004 A1
20050159949 Yu Jul 2005 A1
20050192801 Lewis Sep 2005 A1
20070063048 Havens et al. Mar 2007 A1
20070208567 Amento Sep 2007 A1
20080120102 Rao May 2008 A1
20090134221 Zhu et al. May 2009 A1
20100177076 Essinger et al. Jul 2010 A1
20100177080 Essinger et al. Jul 2010 A1
20100177707 Essinger et al. Jul 2010 A1
20100177749 Essinger et al. Jul 2010 A1
20100179811 Gupta Jul 2010 A1
20100265880 Rautiola Oct 2010 A1
20110169999 Grunow et al. Jul 2011 A1
20110184719 Christ Jul 2011 A1
20110202554 Powilleit et al. Aug 2011 A1
20120111946 Golant May 2012 A1
20120168512 Kotlarsky et al. Jul 2012 A1
20120193423 Samek Aug 2012 A1
20120203647 Smith Aug 2012 A1
20120223141 Good et al. Sep 2012 A1
20130043312 Van Horn Feb 2013 A1
20130075168 Amundsen et al. Mar 2013 A1
20130175341 Kearney et al. Jul 2013 A1
20130175343 Good Jul 2013 A1
20130257744 Daghigh et al. Oct 2013 A1
20130257759 Daghigh Oct 2013 A1
20130270346 Xian et al. Oct 2013 A1
20130287258 Kearney Oct 2013 A1
20130292475 Kotlarsky et al. Nov 2013 A1
20130292477 Hennick et al. Nov 2013 A1
20130293539 Hunt et al. Nov 2013 A1
20130293540 Laffargue et al. Nov 2013 A1
20130306728 Thuries et al. Nov 2013 A1
20130306731 Pedrao Nov 2013 A1
20130307964 Bremer et al. Nov 2013 A1
20130308625 Park et al. Nov 2013 A1
20130313324 Koziol et al. Nov 2013 A1
20130313325 Wilz et al. Nov 2013 A1
20130342717 Havens et al. Dec 2013 A1
20140001267 Giordano et al. Jan 2014 A1
20140002828 Laffargue et al. Jan 2014 A1
20140008439 Wang Jan 2014 A1
20140025584 Liu et al. Jan 2014 A1
20140034734 Sauerwein, Jr. Feb 2014 A1
20140036848 Pease et al. Feb 2014 A1
20140039693 Havens et al. Feb 2014 A1
20140042814 Kather et al. Feb 2014 A1
20140049120 Kohtz et al. Feb 2014 A1
20140049635 Laffargue et al. Feb 2014 A1
20140061306 Wu et al. Mar 2014 A1
20140063289 Hussey et al. Mar 2014 A1
20140066136 Sauerwein et al. Mar 2014 A1
20140067692 Ye et al. Mar 2014 A1
20140070005 Nahill et al. Mar 2014 A1
20140071840 Venancio Mar 2014 A1
20140074746 Wang Mar 2014 A1
20140076974 Havens et al. Mar 2014 A1
20140078341 Havens et al. Mar 2014 A1
20140078342 Li et al. Mar 2014 A1
20140078345 Showering Mar 2014 A1
20140098792 Wang et al. Apr 2014 A1
20140100774 Showering Apr 2014 A1
20140100813 Showering Apr 2014 A1
20140103115 Meier et al. Apr 2014 A1
20140104413 McCloskey et al. Apr 2014 A1
20140104414 McCloskey et al. Apr 2014 A1
20140104416 Giordano et al. Apr 2014 A1
20140104451 Todeschini et al. Apr 2014 A1
20140106594 Skvoretz Apr 2014 A1
20140106725 Sauerwein, Jr. Apr 2014 A1
20140108010 Maltseff et al. Apr 2014 A1
20140108402 Gomez et al. Apr 2014 A1
20140108682 Caballero Apr 2014 A1
20140110485 Toa et al. Apr 2014 A1
20140114530 Fitch et al. Apr 2014 A1
20140124577 Wang et al. May 2014 A1
20140124579 Ding May 2014 A1
20140125842 Winegar May 2014 A1
20140125853 Wang May 2014 A1
20140125999 Longacre et al. May 2014 A1
20140129378 Richardson May 2014 A1
20140131438 Kearney May 2014 A1
20140131441 Nahill et al. May 2014 A1
20140131443 Smith May 2014 A1
20140131444 Wang May 2014 A1
20140131445 Ding et al. May 2014 A1
20140131448 Xian et al. May 2014 A1
20140133379 Wang et al. May 2014 A1
20140136208 Maltseff et al. May 2014 A1
20140140585 Wang May 2014 A1
20140151453 Meier et al. Jun 2014 A1
20140152882 Samek et al. Jun 2014 A1
20140158770 Sevier et al. Jun 2014 A1
20140159869 Zumsteg et al. Jun 2014 A1
20140166755 Liu et al. Jun 2014 A1
20140166757 Smith Jun 2014 A1
20140166759 Liu et al. Jun 2014 A1
20140168787 Wang et al. Jun 2014 A1
20140175165 Havens et al. Jun 2014 A1
20140175172 Jovanovski et al. Jun 2014 A1
20140191644 Chaney Jul 2014 A1
20140191913 Ge et al. Jul 2014 A1
20140197238 Liu et al. Jul 2014 A1
20140197239 Havens et al. Jul 2014 A1
20140197304 Feng et al. Jul 2014 A1
20140203087 Smith et al. Jul 2014 A1
20140204268 Grunow et al. Jul 2014 A1
20140214631 Hansen Jul 2014 A1
20140217166 Berthiaume et al. Aug 2014 A1
20140217180 Liu Aug 2014 A1
20140231500 Ehrhart et al. Aug 2014 A1
20140232930 Anderson Aug 2014 A1
20140247315 Marty et al. Sep 2014 A1
20140263493 Amurgis et al. Sep 2014 A1
20140263645 Smith et al. Sep 2014 A1
20140267609 Laffargue Sep 2014 A1
20140270196 Braho et al. Sep 2014 A1
20140270229 Braho Sep 2014 A1
20140278387 Digregorio Sep 2014 A1
20140282210 Bianconi Sep 2014 A1
20140284384 Lu et al. Sep 2014 A1
20140288933 Braho et al. Sep 2014 A1
20140297058 Barker et al. Oct 2014 A1
20140299665 Barber et al. Oct 2014 A1
20140312121 Lu et al. Oct 2014 A1
20140319220 Coyle Oct 2014 A1
20140319221 Oberpriller et al. Oct 2014 A1
20140326787 Barten Nov 2014 A1
20140332590 Wang et al. Nov 2014 A1
20140344943 Todeschini et al. Nov 2014 A1
20140346233 Liu et al. Nov 2014 A1
20140351317 Smith et al. Nov 2014 A1
20140353373 Van et al. Dec 2014 A1
20140361073 Qu et al. Dec 2014 A1
20140361082 Xian et al. Dec 2014 A1
20140362184 Jovanovski et al. Dec 2014 A1
20140363015 Braho Dec 2014 A1
20140369511 Sheerin et al. Dec 2014 A1
20140374483 Lu Dec 2014 A1
20140374485 Xian et al. Dec 2014 A1
20150001301 Ouyang Jan 2015 A1
20150001304 Todeschini Jan 2015 A1
20150003673 Fletcher Jan 2015 A1
20150009338 Laffargue et al. Jan 2015 A1
20150009610 London et al. Jan 2015 A1
20150014416 Kotlarsky et al. Jan 2015 A1
20150021397 Rueblinger et al. Jan 2015 A1
20150028102 Ren et al. Jan 2015 A1
20150028103 Jiang Jan 2015 A1
20150028104 Ma et al. Jan 2015 A1
20150029002 Yeakley et al. Jan 2015 A1
20150032709 Maloy et al. Jan 2015 A1
20150039309 Braho et al. Feb 2015 A1
20150040378 Saber et al. Feb 2015 A1
20150048168 Fritz et al. Feb 2015 A1
20150049347 Laffargue et al. Feb 2015 A1
20150051992 Smith Feb 2015 A1
20150053766 Havens et al. Feb 2015 A1
20150053768 Wang et al. Feb 2015 A1
20150053769 Thuries et al. Feb 2015 A1
20150062366 Liu et al. Mar 2015 A1
20150063215 Wang Mar 2015 A1
20150063676 Lloyd et al. Mar 2015 A1
20150069130 Gannon Mar 2015 A1
20150071819 Todeschini Mar 2015 A1
20150083800 Li et al. Mar 2015 A1
20150086114 Todeschini Mar 2015 A1
20150088522 Hendrickson et al. Mar 2015 A1
20150096872 Woodburn Apr 2015 A1
20150099557 Pettinelli et al. Apr 2015 A1
20150100196 Hollifield Apr 2015 A1
20150102109 Huck Apr 2015 A1
20150115035 Meier et al. Apr 2015 A1
20150127791 Kosecki et al. May 2015 A1
20150128116 Chen et al. May 2015 A1
20150129659 Feng et al. May 2015 A1
20150133047 Smith et al. May 2015 A1
20150134470 Hejl et al. May 2015 A1
20150136851 Harding et al. May 2015 A1
20150136854 Lu et al. May 2015 A1
20150142492 Kumar May 2015 A1
20150144692 Hejl May 2015 A1
20150144698 Teng et al. May 2015 A1
20150144701 Xian et al. May 2015 A1
20150149946 Benos et al. May 2015 A1
20150161429 Xian Jun 2015 A1
20150169925 Chen et al. Jun 2015 A1
20150169929 Williams et al. Jun 2015 A1
20150186703 Chen et al. Jul 2015 A1
20150187355 Parkinson Jul 2015 A1
20150193644 Kearney et al. Jul 2015 A1
20150193645 Colavito et al. Jul 2015 A1
20150199957 Funyak et al. Jul 2015 A1
20150204671 Showering Jul 2015 A1
20150210199 Payne Jul 2015 A1
20150220753 Zhu et al. Aug 2015 A1
20150254485 Feng et al. Sep 2015 A1
20150327012 Bian et al. Nov 2015 A1
20160014251 Hejl Jan 2016 A1
20160040982 Li et al. Feb 2016 A1
20160042241 Todeschini Feb 2016 A1
20160057230 Todeschini et al. Feb 2016 A1
20160109219 Ackley et al. Apr 2016 A1
20160109220 Laffargue et al. Apr 2016 A1
20160109224 Thuries et al. Apr 2016 A1
20160112631 Ackley et al. Apr 2016 A1
20160112643 Laffargue et al. Apr 2016 A1
20160124516 Schoon et al. May 2016 A1
20160125217 Todeschini May 2016 A1
20160125342 Miller et al. May 2016 A1
20160125873 Braho et al. May 2016 A1
20160133253 Braho et al. May 2016 A1
20160171720 Todeschini Jun 2016 A1
20160178479 Goldsmith Jun 2016 A1
20160180678 Ackley et al. Jun 2016 A1
20160189087 Morton et al. Jun 2016 A1
20160227912 Oberpriller et al. Aug 2016 A1
20160232891 Pecorari Aug 2016 A1
20160292477 Bidwell Oct 2016 A1
20160294779 Yeakley et al. Oct 2016 A1
20160306769 Kohtz et al. Oct 2016 A1
20160314276 Wilz et al. Oct 2016 A1
20160314294 Kubler et al. Oct 2016 A1
20170053647 Nichols Feb 2017 A1
Foreign Referenced Citations (4)
Number Date Country
2013163789 Nov 2013 WO
2013173985 Nov 2013 WO
2014019130 Feb 2014 WO
2014110495 Jul 2014 WO
Non-Patent Literature Citations (37)
Entry
U.S. Appl. No. 29/530,600 for Cyclone filed Jun. 18, 2015 (Vargo et al.); 16 pages.
U.S. Appl. No. 29/529,441 for Indicia Reading Device filed Jun. 8, 2015 (Zhou et al.); 14 pages.
U.S. Appl. No. 29/528,890 for Mobile Computer Housing filed Jun. 2, 2015 (Fitch et al.); 61 pages.
U.S. Appl. No. 29/526,918 for Charging Base filed May 14, 2015 (Fitch et al.); 10 pages.
U.S. Appl. No. 29/525,068 for Tablet Computer With Removable Scanning Device filed Apr. 27, 2015 (Schulte et al.); 19 pages.
U.S. Appl. No. 29/523,098 for Handle for a Tablet Computer filed Apr. 7, 2015 (Bidwell et al.); 17 pages.
U.S. Appl. No. 29/516,892 for Tablet Computer filed Feb. 6, 2015 (Bidwell et al.); 13 pages.
U.S. Appl. No. 29/468,118 for an Electronic Device Case, filed Sep. 26, 2013 (Oberpriller et al.); 44 pages.
U.S. Appl. No. 14/446,391 for Multifunction Point of Sale Apparatus With Optical Signature Capture filed Jul. 30, 2014 (Good et al.); 37 pages; now abandoned.
U.S. Appl. No. 14/277,337 for Multipurpose Optical Reader, filed May 14, 2014 (Jovanovski et al.); 59 pages; now abandoned.
U.S. Patent Application for Tracking Battery Conditions filed May 4, 2015 (Young et al.); 70 pages, U.S. Appl. No. 14/702,979.
U.S. Patent Application for Terminal Having Illumination and Focus Control filed May 21, 2014 (Liu et al.); 31 pages; now abandoned, U.S. Appl. No. 14/283,282.
U.S. Patent Application for Tactile Switch for a Mobile Electronic Device filed Jun. 16, 2015 (Bandringa); 38 pages, U.S. Appl. No. 14/740,320.
U.S. Patent Application for System and Method for Regulating Barcode Data Injection Into a Running Application on a Smart Device filed May 1, 2015 (Todeschini et al.); 38 pages, U.S. Appl. No. 14/702,110.
U.S. Patent Application for Portable Electronic Devices Having a Separate Location Trigger Unit for Use in Controlling an Application Unit filed Nov. 3, 2014, Bian et al., 18 pages, U.S. Appl. No. 14/398,542.
U.S. Patent Application for Optical Pattern Projector filed Jun. 23, 2015 (Thuries et al.); 33 pages, U.S. Appl. No. 14/747,197.
U.S. Patent Application for Method and System to Protect Software-Based Network-Connected Devices From Advanced Persistent Threat filed May 6, 2015 (Hussey et al.); 42 pages, U.S. Appl. No. 14/705,407.
U.S. Patent Application for Intermediate Linear Positioning filed May 5, 2015 (Charpentier et al.); 60 pages, U.S. Appl. No. 14/704,050.
U.S. Patent Application for Indicia-Reading Systems Having an Interface With a User's Nervous System filed Jun. 10, 2015 (Todeschini); 39 pages, U.S. Appl. No. 14/735,717.
U.S. Patent Application for Hands-Free Human Machine Interface Responsive to a Driver of a Vehicle filed May 6, 2015 (Fitch et al.); 44 pages, U.S. Appl. No. 14/705,012.
U.S. Patent Application for Hand-Mounted Indicia-Reading Device With Finger Motion Triggering filed Apr. 1, 2014, Van Horn et al., 29 pages, U.S. Appl. No. 14/231,898.
U.S. Patent Application for Evaluating Image Values filed May 19, 2015 (Ackley); 60 pages, U.S. Appl. No. 14/715,916.
U.S. Patent Application for Dual-Projector Three-Dimensional Scanner filed Jun. 23, 2015 (Jovanovski et al.); 40 pages, U.S. Appl. No. 14/747,490.
U.S. Patent Application for Calibrating a Volume Dimensioner filed Jun. 16, 2015 (Ackley et al.); 63 pages, U.S. Appl. No. 14/740,373.
U.S. Patent Application for Barcode Reader With Security Features filed Oct. 31, 2014, Todeschini et al., 28 pages, U.S. Appl. No. 14/529,857.
U.S. Patent Application for Augumented Reality Enabled Hazard Display filed May 19, 2015 (Venkatesha et al.); 35 pages, U.S. Appl. No. 14/715,672.
U.S. Patent Application for Application Independent DEX/UCS Interface filed May 8, 2015 (Pape); 47 pages, U.S. Appl. No. 14/707,123.
U.S. Patent Application for Media Gate for Thermal Transfer Printers filed Dec. 23, 2014, Jason Dean Lewis Bowles, 32 pages, U.S. Appl. No. 14/580,262.
U.S. Appl. No. 13/367,978, filed Feb. 7, 2012, (Feng et al.); now abandoned.
Office Action from U.S. Appl. No. 15/233,992, dated Sep. 19, 2017, 9 pages.
Office Action from U.S. Appl. No. 15/233,992, dated May 18, 2018, 12 pages.
Office Action from U.S. Appl. No. 15/233,992, dated Jul. 25, 2018, 12 pages.
Notice of Allowance from U.S. Appl. No. 15/233,992, dated May 1, 2019, 12 pages.
Examination Report for GB Patent Application No. 1613949.5, dated Dec. 12, 2018, 4 pages.
Combined Search and Examination Report in counterpart GB Application No. 1613949.5 dated Jan. 30, 2017, pp. 1-6.
U.S. Appl. No. 15/233,992, filed Aug. 11, 2016, Pending.
Office Action for British Application No. 1903587.2, dated Aug. 15, 2019, 8 pages.
Related Publications (1)
Number Date Country
20190237077 A1 Aug 2019 US
Provisional Applications (1)
Number Date Country
62206884 Aug 2015 US
Continuations (1)
Number Date Country
Parent 15233992 Aug 2016 US
Child 16379285 US