The present invention generally relates to a microphone for receiving verbal utterances from a user's mouth and, more particularly, to automated systems and methods for determining the proximity of the microphone to the user's mouth.
An advantage provided by speech recognition equipment is that a person can verbally communicate with a computer in a hands-free manner. For example, a person may verbally communicate with the computer by way of a microphone that is part of a headset, or the like. A factor in the accuracy of such communicating can be the position of the microphone relative to the user's mouth. For example, best results may be achieved when the microphone is positioned in an optimal position relative to the user's mouth. However, there can be a wide variety of reasons why a user does not position the microphone in the optimal position, such as the user's inexperience or forgetfulness, the optimal position varying in response to different environmental noises or different equipment setups, or the like.
Therefore, there is a need for a system and method for automatically determining the approximate position of a microphone relative to a user's mouth, for example in real time, so that the determined position may be considered in determining whether corrective positional adjustments to the microphone may increase the functionality of the speech recognition equipment, and the determined position may be considered in performance metrics (e.g., analysis of speech recognition performance).
In one aspect, the present invention embraces a method for determining a relative position of a microphone, the method comprising: capturing speech audio from a user's mouth with the microphone so that the microphone outputs an electrical signal indicative of the speech audio; determining an indication of a position of the microphone relative to the user's mouth, comprising providing a plurality of inputs to a computerized discriminative classifier, wherein an input of the plurality of inputs is derived from the electrical signal, and wherein an output from the computerized discriminative classifier is indicative of the position of the microphone relative to the user's mouth.
In an embodiment, the method comprises a computer determining whether the determined indication of the position of the microphone is unacceptable; and the computer providing a signal in response to the computer determining that the determined indication of the position of the microphone is unacceptable.
In an embodiment, the method comprises a computer deriving the input from the electrical signal.
In an embodiment, the method comprises calculating a Fourier transformation on data selected from the group consisting of the electrical signal and data derived from the electrical signal.
In an embodiment, the input comprises results from the calculating of the Fourier transformation.
In an embodiment, the input is derived from results from the calculating of the Fourier transformation.
In an embodiment, the method comprises decoding a phoneme from data selected from the group consisting of the electrical signal and data derived from the electrical signal.
In an embodiment, the input comprises the phoneme, and the decoding of the phoneme comprises using a text-to-phoneme engine.
In an embodiment, the method comprises deriving first and second inputs of the plurality of inputs from the electrical signal; and weighting the first input more heavily than any weighting of the second input in the computerized discriminative classifier.
In an embodiment, the method comprises providing first and second phonemes that are different from one another, comprising performing text-to-phoneme conversions, wherein the first input comprises the first phoneme, and wherein the second input comprises the second phoneme.
In another aspect, the present invention embraces a method for determining a relative position of a microphone, the method comprising: providing a plurality of inputs to a discriminative classifier implemented on a computer, the plurality of inputs comprising data selected from the group consisting of an electrical signal output from the microphone in response to the microphone capturing speech audio from a user's mouth while the microphone is at a position relative to the user's mouth, and data derived from the electrical signal; the computer receiving an output from the discriminative classifier, the output providing an indication of the position of the microphone relative to the user's mouth; and the computer determining whether the indicated position of the microphone is unacceptable, and providing a signal if the indicated position of the microphone is unacceptable.
In an embodiment, the microphone is part of a headset that comprises a speaker, and the method comprises the speaker providing an audio indication that the position of the microphone is unacceptable, wherein the speaker providing the audio indication is in response to the computer providing the signal.
In an embodiment, the method comprises deriving the input from the electrical signal, wherein the input is selected from the group consisting of a Fourier transform and a phoneme.
In another aspect, the present invention embraces a method for determining a relative position of a microphone, the method comprising: capturing speech audio from a user's mouth with the microphone so that the microphone outputs an electrical signal indicative of the speech audio; a computer deriving a plurality of inputs from the electrical signal; determining an indication of a position of the microphone relative to the user's mouth, comprising providing at least the plurality of inputs to a discriminative classifier implemented on the computer; the computer receiving an output from the discriminative classifier, the output providing an indication of the position of the microphone relative to the user's mouth; and the computer determining whether the indicated position of the microphone is unacceptable, and providing a signal if the indicated position of the microphone is unacceptable.
In an embodiment, the method comprises the computer calculating a Fourier transformation on data selected from the group consisting of the electrical signal and data derived from the electrical signal, wherein an input of the plurality of inputs comprises results from the calculating of the Fourier transformation.
In an embodiment, the method comprises the computer calculating a Fourier transformation on data selected from the group consisting of the electrical signal and data derived from the electrical signal, wherein an input of the plurality of inputs is derived from results from the calculating of the Fourier transformation.
In an embodiment, the method comprises the computer decoding a phoneme from data selected from the group consisting of the electrical signal and data derived from the electrical signal, wherein an input of the plurality of inputs comprises the phoneme.
In an embodiment, the method comprises the computer decoding the phoneme using a text-to-phoneme engine.
In an embodiment, the method comprises deriving first and second inputs of the plurality of inputs from the electrical signal; and weighting the first input more heavily than any weighting of the second input in the discriminative classifier.
In an embodiment, the method comprises providing first and second phonemes that are different from one another, comprising performing text-to-phoneme conversions, wherein the first input comprises the first phoneme, and the second input comprises the second phoneme.
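By way of illustration only, the following minimal sketch (in Python, assuming NumPy and scikit-learn; the frame size, labels, and data are hypothetical placeholders rather than the claimed implementation) shows one way an FFT-derived input could be provided to a discriminative classifier whose output indicates the microphone position:

```python
# Minimal sketch (not the claimed implementation): deriving an FFT-based
# input from the microphone's electrical signal and feeding it to a
# discriminative classifier whose output indicates the microphone position.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fft_features(signal, frame_size=256):
    """Average FFT magnitude over fixed-size frames of the signal."""
    n_frames = len(signal) // frame_size
    frames = signal[: n_frames * frame_size].reshape(n_frames, frame_size)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 129))    # 129 = 256 // 2 + 1 rFFT bins
y_train = np.arange(100) % 2             # placeholder good/bad position labels
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)

captured = rng.normal(size=16000)        # stand-in for captured speech audio
position_ok = classifier.predict(fft_features(captured).reshape(1, -1))[0]
```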
The foregoing illustrative summary, as well as other exemplary objectives and/or advantages of the invention, and the manner in which the same are accomplished, are further explained within the following detailed description and its accompanying drawings.
The present invention is generally directed to systems and methods for automatically determining a position of a microphone relative to a user's mouth, so that the determined position may be considered in determining whether corrective positional adjustments to the microphone may increase the functionality of an associated speech recognition module. In addition or alternatively, the determined position may be considered in performance metrics (e.g., analysis of speech recognition performance). In an embodiment of this disclosure, such a system for automatically determining the approximate position of the microphone can be part of a larger system that can include a mobile device, and the mobile device can be a headset assembly that includes the microphone. The mobile device, or headset assembly, can be associated with a voice recognition module configured for allowing the mobile device to be used in a hands-free manner. Alternatively, the mobile device can be manually carried or mounted to a movable piece of equipment, such as a cart being used by a worker.
In the embodiment shown in the drawings, the headset assembly 10 includes a headset 15 and an electronics module 12. The headset 15 can include a headset frame or headband 17 configured to be worn on the user's head, one or more speakers 20, a main microphone 25, and a secondary microphone 26.
The position of the main microphone 25 relative to the user's mouth may be adjustable, such as by adjusting the position of the headset frame or headband 17 relative to the user's head. For example, in one embodiment, the main microphone 25 can be fixed in position relative to the headset frame or headband 17, so that during positional adjustments of the headset frame or headband the main microphone moves with the headset frame or headband relative to the user's head and, thus, relative to the user's mouth. In contrast or addition, as schematically shown with dashed lines in the drawings, the main microphone 25 can be movably connected to the headset frame or headband 17, so that the position of the main microphone can be adjusted relative to the headset frame or headband and, thus, relative to the user's mouth.
The electronics module 12 of the headset assembly 10 can contain or otherwise carry several components of the headset assembly to reduce the weight and/or size of the headset 15. In some embodiments, the electronics module 12 can include one or more of a rechargeable or long life battery, keypad, antenna (e.g., Bluetooth® antenna), printed circuit board assembly, and any other suitable electronics, or the like, as discussed in greater detail below. The electronics module 12 can be releasably mounted to a user's torso or in any other suitable location for being carried by the user, typically in a hands-free manner. The electronics module 12 can utilize a user-configurable fastener or attachment feature 28, such as a belt clip, lapel clip, loop, lanyard and/or other suitable features, for at least partially facilitating attachment of the electronics module to the user. The headset 15 can be connected to the electronics module 12 via a communication link, such as a small audio cable 30 or a wireless link.
For example and not for the purpose of limiting the scope of this disclosure, the headset assembly 10 can be used to support multiple workflows in multiple markets, including grocery retail, direct store delivery, wholesale, etc. In some embodiments, the headset 15 has a low profile that seeks not to be intimidating to a customer in a retail setting. That is, the headset 15 can be relatively minimalistic in appearance in some embodiments, or alternatively the headset 15 can have a larger profile in other embodiments. The electronics module 12 can be used with a wide variety of differently configured headsets, such as Vocollect™ headsets.
The electronics module 12 can be configured to read a unique identifier (I.D.) of the headset 15. The headset I.D. can be stored in an electronic circuitry package that is part of the headset 15, and the headset electronic circuitry package can be configured to at least partially provide the connection (e.g., communication path(s)) between the electronics module 12 and headset features (e.g., the one or more speakers 20 and microphones 25, 26). In one embodiment, the audio cable 30 includes multiple conductors or communication lines, such as for providing audio signals from the electronics module 12 to the headset 15 (i.e., the speakers 20), and providing audio signals from the headset (i.e., the microphones 25, 26) to the electronics module. When a wireless communications link between the headset 15 and electronics module 12 is used, such as a wireless local area network (e.g., a Bluetooth® type of communication link), the headset 15 can include a small lightweight battery and other suitable features. The wireless communication link can provide wireless signals suitable for exchanging voice communications. In an embodiment (not shown), the electronics module 12 can be integrated into the headset 15 rather than being remote from, and connected to, the headset 15. Accordingly, the mobile device, which may more specifically be in the form of the headset assembly 10, or the like, may include multiple pieces with separate housings or can be substantially contained in, or otherwise be associated with, a single housing.
In the embodiment schematically shown in the drawings, the headset assembly 10 is part of a larger system 40 that includes a computer 42, and the electronics module 12 can communicate with the computer 42 by way of a communication path 44.
As indicated above and discussed in greater detail below, the electronics module 12 can contain or otherwise carry several components (e.g., software, firmware and/or hardware) of the headset assembly 10. In this regard, the housing or frame of the electronics module 12 is schematically represented by an outer block in the drawings.
The computer 42 can be one or more computers, such as a series of computers connected to one another in a wired and/or wireless manner over a network, such as a wireless local area network, to form a distributed computer system. More generally, throughout this document any reference to an article (e.g., computer 42) encompasses one or more of that article, unless indicated otherwise. As a specific example, and not for the purpose of limiting the scope of this disclosure, the computer 42 can comprise a retail store computer having applications and data for managing operations of the retail store (e.g., an enterprise system, such as a retail management system, inventory management system or the like), including inventory control and other functions, such as point of sale functions.
In an embodiment, the computer 42 is configured to simultaneously interface with multiple of the headset assemblies 10, and thereby the users respectively associated with the headset assemblies, to simultaneously provide one or more work tasks or workflows that can be related to products or other items being handled by the users (e.g., workers) in a workplace (e.g., a retail store, warehouse, restaurant, or the like). The computer 42 can be located at one facility or be distributed at geographically distinct facilities. Furthermore, the computer 42 may include a proxy server. Therefore, the computer 42 is not limited in scope to a specific configuration. For example, and alternatively, each of the headset assemblies 10 can substantially be a standalone device, such that the computers 42 or suitable features thereof are part of the headset assemblies. However, to have sufficient database capability to simultaneously handle large amounts of information associated with multiple headset assemblies 10 being operated simultaneously, the computer 42 typically comprises a server computer configured to simultaneously interface with multiple of the headset assemblies (e.g., mobile devices).
As alluded to above, the computer 42 can contain or otherwise carry several components (e.g., software, firmware and/or hardware). In this regard and as shown in the drawings, the computer 42 can include a processor configured to execute software modules stored in associated memory.
As alluded to above, the system 40 can be used in various speech-directed/speech-assisted work environments. Accordingly, the processor of the computer 42 can execute or run one or more speech recognition (e.g., speech-to-text) software modules and/or text-to-speech software modules, although one or more of these software modules may be executed on the electronics module 12 instead. More specifically, the computer 42 can include a speech recognition module, or more specifically a speech-to-text decoder, configured to transform electronic audio signals, which are generated by the main microphone 25 capturing speech audio from the user's mouth, into text data, or the like. For example, the speech-to-text decoder can include voice templates that are stored in the computer 42 and configured to recognize user voice interactions and convert the interactions into text-based data. That text-based data can be utilized as information or instructions for interacting with at least one software application or module being executed on the computer 42. Both the above-discussed and the below-discussed functions ascribed to individual components of the system 40 can be performed in one or more other locations in further embodiments. For example, the computer 42 can perform voice recognition in one embodiment, or the electronics module 12 can perform voice recognition utilizing the voice templates. In one embodiment, the first stages of voice recognition can be performed on the electronics module 12, with further stages performed on the computer 42. In further embodiments, raw audio can be transmitted from the electronics module 12 to the computer 42 where the voice recognition is completed.
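The following is an illustrative sketch only of dividing the recognition stages in this way; the function names and the energy-based "decoding" are hypothetical simplifications, not the speech recognition of this disclosure:

```python
# Illustrative sketch: one way to split speech recognition between the
# electronics module (first stage: framing raw audio) and the computer
# (later stage: matching against stored voice templates).
import numpy as np

def electronics_module_stage(audio, frame_size=256):
    """First stage on the electronics module 12: frame the raw audio."""
    n = len(audio) // frame_size
    return audio[: n * frame_size].reshape(n, frame_size)

def computer_stage(frames, voice_templates):
    """Later stage on the computer 42: pick the template whose expected
    energy is closest to the observed frame energy."""
    energy = float((frames ** 2).mean())
    return min(voice_templates, key=lambda w: abs(voice_templates[w] - energy))

frames = electronics_module_stage(np.random.default_rng(1).normal(size=4000))
word = computer_stage(frames, {"pick": 1.0, "stop": 2.5})
```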
Functionality of (e.g., accuracy of the transforming performed by) the speech-to-text decoder of the system 40 may depend upon the position of the main microphone 25 relative to the user's mouth, wherein examples of a variety of different positions of the main microphone are shown in the drawings.
An overall method of an embodiment of this disclosure can include a data collecting method 100, a training method 200, a testing method 400, and a position-checking method 500, each of which is discussed below.
Generally described, the data collecting method 100 can be used to collect both a set of training data used in the training method 200, and a set of test data used in the testing method 400. The training method 200 can be used to create the discriminative classifier model using a discriminative classifier trainer (e.g., the computerized discriminative classifier trainer 305 discussed below). The testing method 400 can be used to evaluate the discriminative classifier model, and the position-checking method 500 can use the discriminative classifier model to determine an indication of the approximate position of the main microphone 25 relative to the user's mouth.
At block 105 of the data collecting method 100, the main microphone 25 of a headset 15 being worn by a user captures speech audio from the user's mouth, and the headset assembly 10 responsively provides an electrical signal indicative of the speech audio to the computer 42, wherein the electrical signal can be an electronic audio signal. At substantially the same time that the headset assembly 10 responsively provides the audio signal of block 105 (e.g., in real time), at block 110 the computer 42 can receive the audio signal from block 105. Also at or associated with block 110, the computer 42 can obtain or receive data for one or more contextual variables that may be useful as inputs for the discriminative classifier trainer and/or the discriminative classifier model, depending upon whether training or test data is being collected. The data for the one or more contextual variables associated with block 110 can be referred to as contextual data. The contextual variables may include one or more of the measured position of the main microphone 25 relative to the user's mouth (e.g., the actual, manually measured distance between the main microphone and the user's mouth), any gain setting of the system 40 (e.g., for increasing the power or amplitude of the electronic audio signal originating at block 105), a classification of the background noise (e.g., identification of the frequency content of the background noise) and/or any other suitable information. It is typical for the contextual data that is in possession of the computer 42 at block 110 to have originated at the same time as the occurrence of block 105, or otherwise be representative of conditions occurring at block 105. For example, the measured position of the main microphone 25 relative to the user's mouth, or more specifically the distance between the main microphone and the user's mouth, may be manually measured with a ruler or any other suitable device while the microphone is positioned as it was at the occurrence of block 105, and the measured distance may be input to the computer 42 by way of a suitable input device of the computer.
Processing control is transferred from block 110 to block 115. At block 115, the data (typically including at least the audio signal and the measured position of the main microphone 25 relative to the user's mouth) received at block 110 is identified as being part of a data unit and stored in at least one database (e.g., a relational database) of, or otherwise associated with, the computer 42. In the data unit created at block 115, the audio signal of block 110 may be identified as the main data of the data unit, and the data unit can further include metadata, and the metadata may comprise the measured position of the main microphone 25 relative to the user's mouth, and any other suitable contextual data. As mentioned above, the computer 42 may be in the form of a distributed computer system. Similarly, one or more databases associated with the computer 42 can be in the form of a distributed database system.
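For illustration, a data unit of the kind described above might be represented as in the following sketch (the field names are hypothetical, not from this disclosure):

```python
# Sketch of a data unit: the audio signal as main data plus contextual
# metadata such as the manually measured microphone-to-mouth distance.
from dataclasses import dataclass, field

@dataclass
class DataUnit:
    audio_signal: list        # main data: sampled electronic audio signal
    mic_distance_mm: float    # manually measured microphone-to-mouth distance
    gain_setting: float       # any gain applied to the audio signal
    noise_class: str          # classification of the background noise
    extra: dict = field(default_factory=dict)  # any other contextual data

unit = DataUnit(audio_signal=[0.0, 0.1, -0.05],
                mic_distance_mm=40.0, gain_setting=1.5, noise_class="fan")
```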
In one embodiment, the data collecting method 100 is repeated numerous times for numerous different users. For each user, the speech audio and/or one or more of the contextual variables (e.g., the measured position of the microphone 25 relative to the user's mouth) can be varied for different performances of the data collecting method 100, so that data units are collected for a variety of users, microphone positions, and operating conditions.
The data units resulting from performance of the method 100 can be generally segregated into two groups or respectively stored in two databases of, or associated with, the computer 42. For example, a first group of the data units can be referred to as training data units that are used in the training method 200, and a second group of the data units can be referred to as testing data units that are used in the testing method 400.
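A minimal sketch of such a segregation follows (the 80/20 proportion is illustrative only):

```python
# Sketch of segregating collected data units into the two groups.
def split_data_units(data_units, train_fraction=0.8):
    cut = int(len(data_units) * train_fraction)
    return data_units[:cut], data_units[cut:]  # (training units, testing units)

training_units, testing_units = split_data_units(list(range(100)))
```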
In an embodiment described in the following, the training method 200 is at least partially performed by the computer 42 and uses the training data units to create the discriminative classifier model.
In one embodiment, the portions of the training method 200 represented by blocks 210, 215 and 220 can be further understood with reference to the drawings.
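The following sketch illustrates one way such training could look; scikit-learn's LogisticRegression stands in for the discriminative classifier trainer 305, and the 50 mm acceptability threshold and all data are hypothetical:

```python
# Sketch of training: derive inputs from each training data unit and fit a
# discriminative classifier on good/bad position labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def inputs_from_unit(unit):
    spectrum = np.abs(np.fft.rfft(unit["audio"]))  # Fourier-transform input
    return np.append(spectrum, unit["gain"])       # plus the gain setting

rng = np.random.default_rng(2)
units = [{"audio": rng.normal(size=256), "gain": 1.0, "distance_mm": float(d)}
         for d in rng.uniform(10, 120, size=60)]
X = np.array([inputs_from_unit(u) for u in units])
y = np.array([u["distance_mm"] <= 50.0 for u in units])  # "good" position label
model = LogisticRegression(max_iter=1000).fit(X, y)
```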
In an embodiment described in the following, the testing method 400 is at least partially performed by the computer 42 and uses the testing data units to evaluate the discriminative classifier model created by way of the training method 200.
In an embodiment described in the following, the position-checking method 500 is at least partially performed by the computer 42. At block 510, the computer 42 receives an audio signal that originates from the main microphone 25 capturing speech audio from the user's mouth while the headset 15 is being worn.
Processing control is transferred from block 510 to block 515. At block 515, data is derived from the audio signal received at block 510, as discussed in greater detail below. The derived data is provided as one or more inputs to the discriminative classifier model 305 and, at block 525, the computer 42 receives from the discriminative classifier model an output that is indicative of the approximate position of the main microphone 25 relative to the user's mouth.
Processing control is transferred from block 525 to block 530. At block 530, the discriminative classifier-derived value of the approximate position of the main microphone 25 from the user's mouth is compared to an acceptable value or an acceptable range. If the comparison at block 530 indicates that the position of the main microphone 25 relative to the user's mouth is unacceptable, processing control is transferred from block 530 to block 535.
At block 535, the computer 42 can initiate and provide a signal to the headset assembly 10 by way of the communication path 44. As examples, the signal provided at block 535 can be an audio signal that is received by the one or more speakers 20 of the headset 15, so that the speakers provide an audio indication that the position of the main microphone 25 of the headset is unacceptable. More specifically and depending upon the determination made at block 530, the signal provided at block 535 to the one or more speakers 20 can be configured so that the speakers provide an audio indication that the main microphone 25 should be moved closer to, or farther away from, the user's mouth, whichever the case may be. The position-checking method 500 may be looped through numerous times respectively for each of the testing data units, words, phonemes or the like, and the results of the determining associated with block 530 may be averaged or otherwise processed, so that any decision made at block 530 can be based upon an average or other suitable statistical analysis.
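Continuing the training sketch above, the following illustrates the looping and averaging described for blocks 515 through 535 (the 0.5 threshold and the helper below are hypothetical):

```python
# Sketch: score each captured word with the classifier (block 525), average
# the scores (block 530), and signal the headset when unacceptable (block 535).
def send_signal_to_headset(message):
    print("audio indication:", message)  # stand-in for the block 535 signal

def check_position(model, word_signals, threshold=0.5):
    scores = [model.predict_proba(
                  inputs_from_unit({"audio": s, "gain": 1.0}).reshape(1, -1))[0, 1]
              for s in word_signals]
    if np.mean(scores) < threshold:      # block 530: position unacceptable
        send_signal_to_headset("move the microphone")

check_position(model, [rng.normal(size=256) for _ in range(5)])
```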
Referring back to the one or more, or plurality, of inputs to the discriminative classifier features 305 (e.g., the text input 325 and the phoneme input 330), hints can be used in association with one or more of the inputs.
Using hints can comprise weighting predetermined words of the text input 325 more heavily than other words and/or weighting predetermined phonemes of the phoneme input 330 more heavily than other phonemes (e.g., microphone placement may impact some words or phonemes more than others). Using hints can comprise weighting some words and/or phonemes higher than others when making the final classification (e.g., as “good” or “bad”) of the position of the microphone 25. Reiterating from above, the deriving of the phoneme input 330 can comprise the computer 42 performing a text-to-phoneme decoding or conversion on the text input 325, such as with a text-to-phoneme engine or converter, and the computer 42 can be configured to provide, and the discriminative classifier features 305 can be configured to receive, one or more of such phonemes as inputs of the plurality of inputs. In addition, at least some of the phonemes can be weighted differently from one another. In accordance with one aspect of this disclosure, the hints can be used to assign different (e.g., higher) confidence to the discriminative classifier-derived values of the approximate position of the main microphone 25 from the user's mouth, which are received at block 525. For example, the one or more phonemes can comprise first and second phonemes that are different from one another, and the computer 42 and/or discriminative classifier features 305 may be configured to weight the first and second phonemes differently from one another in the above-described methods. As a more specific example, the first phoneme can be weighted more heavily than the second phoneme, so that, with all other inputs being equal, a first discriminative classifier-derived value of the approximate position of the main microphone 25 from the user's mouth received at block 525 for the first phoneme is weighted more heavily than a second discriminative classifier-derived value received at block 525 for the second phoneme, such as during the above-discussed averaging associated with block 530.
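A minimal sketch of such hint-based weighting follows (the phonemes, scores, and weights are invented for illustration):

```python
# Sketch: per-phoneme classifier-derived scores are combined with unequal
# weights, so phonemes that microphone placement affects most count more.
phoneme_scores = {"AA": 0.8, "S": 0.3, "M": 0.6}   # values received at block 525
phoneme_weights = {"AA": 2.0, "S": 1.0, "M": 1.5}  # hint: trust "AA" more

weighted_average = (sum(phoneme_scores[p] * phoneme_weights[p] for p in phoneme_scores)
                    / sum(phoneme_weights[p] for p in phoneme_scores))
```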
More generally, the computer 42 and/or discriminative classifier features 305 may be configured to weight other inputs differently from one another in the above-described methods. For example, it is believed that a first word can be weighted more heavily than a second word, so that, with all other inputs being equal, a first discriminative classifier-derived value of the approximate position of the main microphone 25 from the user's mouth received at block 525 for the first word is weighted more heavily than a second discriminative classifier-derived value received at block 525 for the second word, such as during the above-discussed averaging associated with block 530. As another example, a conversion from a phoneme, or from the number of phonemes, to a floating point number may be used in a manner that seeks to optimize the classification (e.g., as “good” or “bad”) of the position of the microphone 25.
In one aspect of this disclosure, the training method 200 can be a supervised training method.
Further regarding the one or more, or plurality, of inputs to the discriminative classifier features 305, an input of the plurality of inputs can comprise results of one or more Fourier transformations (e.g., fast Fourier transforms (FFTs)) calculated on frames of the audio signal.
As a further example, a separate FFT can be calculated for each of the frames of each word, for each word the FFT can be saved for each frame, and for each word the FFTs for the frames of the word can be averaged. Then for each word, the average FFT, input gain, word identifier, and the maximum audio or energy level for the word can be passed through the discriminative classifier model 305 to determine, or as part of a method to determine, whether the position of the microphone 25 is “good” or “bad”, or the like. The classifying of the position of the microphone 25 as “good” or “bad”, or the like, can comprise subjecting the analysis to hysteresis in a manner that seeks to prevent the determination from quickly oscillating between determinations of “good” and “bad” in an undesirable manner. As another example, a historical database of selected words and their associated classifications (e.g., as “good” or “bad”, or the like) with respect to the position of the microphone 25 can be utilized in a manner that seeks to prevent the system 40 from repeatedly classifying the microphone position incorrectly for a given word just because the specific user differs from the discriminative classifier model 305 more for that word.
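A sketch of such hysteresis follows (the 0.4/0.6 switching bands are illustrative values only):

```python
# Sketch: the good/bad state flips only when the averaged score crosses well
# past the midpoint, preventing rapid oscillation between determinations.
def classify_with_hysteresis(score, previous_state):
    if previous_state == "good" and score < 0.4:
        return "bad"
    if previous_state == "bad" and score > 0.6:
        return "good"
    return previous_state

state = "good"
for score in [0.55, 0.45, 0.35, 0.50, 0.65]:
    state = classify_with_hysteresis(score, state)   # stays "good" until 0.35
```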
An aspect of this disclosure is the provision of a system for determining a relative position of a microphone. For example, the system may be configured for determining an indication of a position of a microphone relative to a user's mouth, wherein the microphone is configured to capture speech audio from the user's mouth, and output an electrical signal indicative of the speech audio. In a first example, the system comprises a computer, and the computer comprises a discriminative classifier and a speech recognition module, wherein the computer is configured to receive the electrical signal, wherein the discriminative classifier is configured to receive a plurality of inputs, and determine an indication of a position of the microphone relative to the user's mouth based upon the plurality of inputs, and wherein the computer is configured so that an input of the plurality of inputs is derived from the electrical signal by the computer prior to the input being received by the discriminative classifier.
A second example is like the first example, except for further comprising a headset that comprises the microphone.
A third example is like the second example, except that in the third example the headset comprises a frame, and the microphone is movably connected to the frame.
A fourth example is like the first example, except that in the fourth example the computer is configured to determine whether the determined indication of the position of the microphone is unacceptable; and provide a signal in response to any determination by the computer that the determined indication of the position of the microphone is unacceptable.
A fifth example is like the fourth example, except for further comprising a headset, wherein: the headset comprises the microphone; the headset further comprises a speaker; the speaker is configured to receive the signal provided by the computer; and the computer is configured so that the signal provided by the computer is configured to cause the speaker to provide an audio indication of the position of the microphone being unacceptable.
A sixth example is like the first example, except that in the sixth example the computer is configured to provide, and the discriminative classifier is configured to receive, at least one Fourier transform as an input of the plurality of inputs.
A seventh example is like the first example, except that in the seventh example the computer is configured to provide, and the discriminative classifier is configured to receive, at least one phoneme as an input of the plurality of inputs.
To supplement the present disclosure, this application incorporates entirely by reference the following commonly assigned patents, patent application publications, and patent applications:
In the specification and/or figures, typical embodiments of the invention have been disclosed. The present invention is not limited to such exemplary embodiments. The use of the term “and/or” includes any and all combinations of one or more of the associated listed items. The figures are schematic representations and so are not necessarily drawn to scale. Unless otherwise noted, specific terms have been used in a generic and descriptive sense and not for purposes of limitation.
This application is a continuation of U.S. patent application Ser. No. 15/209,145, filed Jul. 13, 2016, the contents of which are hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
6832725 | Gardiner et al. | Dec 2004 | B2 |
7128266 | Zhu et al. | Oct 2006 | B2 |
7159783 | Walczyk et al. | Jan 2007 | B2 |
7413127 | Ehrhart et al. | Aug 2008 | B2 |
7496387 | Byford et al. | Feb 2009 | B2 |
7726575 | Wang et al. | Jun 2010 | B2 |
7885419 | Wahl et al. | Feb 2011 | B2 |
8294969 | Plesko | Oct 2012 | B2 |
8317105 | Kotlarsky et al. | Nov 2012 | B2 |
8322622 | Liu | Dec 2012 | B2 |
8366005 | Kotlarsky et al. | Feb 2013 | B2 |
8371507 | Haggerty et al. | Feb 2013 | B2 |
8376233 | Horn et al. | Feb 2013 | B2 |
8381979 | Franz | Feb 2013 | B2 |
8390909 | Plesko | Mar 2013 | B2 |
8408464 | Zhu et al. | Apr 2013 | B2 |
8408468 | Van et al. | Apr 2013 | B2 |
8408469 | Good | Apr 2013 | B2 |
8424768 | Rueblinger et al. | Apr 2013 | B2 |
8448863 | Xian et al. | May 2013 | B2 |
8457013 | Essinger et al. | Jun 2013 | B2 |
8459557 | Havens et al. | Jun 2013 | B2 |
8469272 | Kearney | Jun 2013 | B2 |
8474712 | Kearney et al. | Jul 2013 | B2 |
8479992 | Kotlarsky et al. | Jul 2013 | B2 |
8490877 | Kearney | Jul 2013 | B2 |
8517271 | Kotlarsky et al. | Aug 2013 | B2 |
8523076 | Good | Sep 2013 | B2 |
8528818 | Ehrhart et al. | Sep 2013 | B2 |
8544737 | Gomez et al. | Oct 2013 | B2 |
8548420 | Grunow et al. | Oct 2013 | B2 |
8550335 | Samek et al. | Oct 2013 | B2 |
8550354 | Gannon et al. | Oct 2013 | B2 |
8550357 | Kearney | Oct 2013 | B2 |
8556174 | Kosecki et al. | Oct 2013 | B2 |
8556176 | Van et al. | Oct 2013 | B2 |
8556177 | Hussey et al. | Oct 2013 | B2 |
8559767 | Barber et al. | Oct 2013 | B2 |
8561895 | Gomez et al. | Oct 2013 | B2 |
8561903 | Sauerwein, Jr. | Oct 2013 | B2 |
8561905 | Edmonds et al. | Oct 2013 | B2 |
8565107 | Pease et al. | Oct 2013 | B2 |
8571307 | Li et al. | Oct 2013 | B2 |
8579200 | Samek et al. | Nov 2013 | B2 |
8583924 | Caballero et al. | Nov 2013 | B2 |
8584945 | Wang et al. | Nov 2013 | B2 |
8587595 | Wang | Nov 2013 | B2 |
8587697 | Hussey et al. | Nov 2013 | B2 |
8588869 | Sauerwein et al. | Nov 2013 | B2 |
8590789 | Nahill et al. | Nov 2013 | B2 |
8596539 | Havens et al. | Dec 2013 | B2 |
8596542 | Havens et al. | Dec 2013 | B2 |
8596543 | Havens et al. | Dec 2013 | B2 |
8599271 | Havens et al. | Dec 2013 | B2 |
8599957 | Peake et al. | Dec 2013 | B2 |
8600158 | Li et al. | Dec 2013 | B2 |
8600167 | Showering | Dec 2013 | B2 |
8602309 | Longacre et al. | Dec 2013 | B2 |
8608053 | Meier et al. | Dec 2013 | B2 |
8608071 | Liu et al. | Dec 2013 | B2 |
8611309 | Wang et al. | Dec 2013 | B2 |
8615487 | Gomez et al. | Dec 2013 | B2 |
8621123 | Caballero | Dec 2013 | B2 |
8622303 | Meier et al. | Jan 2014 | B2 |
8628013 | Ding | Jan 2014 | B2 |
8628015 | Wang et al. | Jan 2014 | B2 |
8628016 | Winegar | Jan 2014 | B2 |
8629926 | Wang | Jan 2014 | B2 |
8630491 | Longacre et al. | Jan 2014 | B2 |
8635309 | Berthiaume et al. | Jan 2014 | B2 |
8636200 | Kearney | Jan 2014 | B2 |
8636212 | Nahill et al. | Jan 2014 | B2 |
8636215 | Ding et al. | Jan 2014 | B2 |
8636224 | Wang | Jan 2014 | B2 |
8638806 | Wang et al. | Jan 2014 | B2 |
8640958 | Lu et al. | Feb 2014 | B2 |
8640960 | Wang et al. | Feb 2014 | B2 |
8643717 | Li et al. | Feb 2014 | B2 |
8646692 | Meier et al. | Feb 2014 | B2 |
8646694 | Wang et al. | Feb 2014 | B2 |
8657200 | Ren et al. | Feb 2014 | B2 |
8659397 | Vargo et al. | Feb 2014 | B2 |
8668149 | Good | Mar 2014 | B2 |
8678285 | Kearney | Mar 2014 | B2 |
8678286 | Smith et al. | Mar 2014 | B2 |
8682077 | Longacre, Jr. | Mar 2014 | B1 |
D702237 | Oberpriller et al. | Apr 2014 | S |
8687282 | Feng et al. | Apr 2014 | B2 |
8692927 | Pease et al. | Apr 2014 | B2 |
8695880 | Bremer et al. | Apr 2014 | B2 |
8698949 | Grunow et al. | Apr 2014 | B2 |
8702000 | Barber et al. | Apr 2014 | B2 |
8717494 | Gannon | May 2014 | B2 |
8720783 | Biss et al. | May 2014 | B2 |
8723804 | Fletcher et al. | May 2014 | B2 |
8723904 | Marty et al. | May 2014 | B2 |
8727223 | Wang | May 2014 | B2 |
8740082 | Wilz, Sr. | Jun 2014 | B2 |
8740085 | Furlong et al. | Jun 2014 | B2 |
8746563 | Hennick et al. | Jun 2014 | B2 |
8750445 | Peake et al. | Jun 2014 | B2 |
8752766 | Xian et al. | Jun 2014 | B2 |
8756059 | Braho et al. | Jun 2014 | B2 |
8757495 | Qu et al. | Jun 2014 | B2 |
8760563 | Koziol et al. | Jun 2014 | B2 |
8763909 | Reed et al. | Jul 2014 | B2 |
8777108 | Coyle | Jul 2014 | B2 |
8777109 | Oberpriller et al. | Jul 2014 | B2 |
8779898 | Havens et al. | Jul 2014 | B2 |
8781520 | Payne et al. | Jul 2014 | B2 |
8783573 | Havens et al. | Jul 2014 | B2 |
8789757 | Barten | Jul 2014 | B2 |
8789758 | Hawley et al. | Jul 2014 | B2 |
8789759 | Xian et al. | Jul 2014 | B2 |
8794520 | Wang et al. | Aug 2014 | B2 |
8794522 | Ehrhart | Aug 2014 | B2 |
8794525 | Amundsen et al. | Aug 2014 | B2 |
8794526 | Wang et al. | Aug 2014 | B2 |
8798367 | Ellis | Aug 2014 | B2 |
8807431 | Wang et al. | Aug 2014 | B2 |
8807432 | Van et al. | Aug 2014 | B2 |
8820630 | Qu et al. | Sep 2014 | B2 |
8822848 | Meagher | Sep 2014 | B2 |
8824692 | Sheerin et al. | Sep 2014 | B2 |
8824696 | Braho | Sep 2014 | B2 |
8842849 | Wahl et al. | Sep 2014 | B2 |
8844822 | Kotlarsky et al. | Sep 2014 | B2 |
8844823 | Fritz et al. | Sep 2014 | B2 |
8849019 | Li et al. | Sep 2014 | B2 |
D716285 | Chaney et al. | Oct 2014 | S |
8851383 | Yeakley et al. | Oct 2014 | B2 |
8854633 | Laffargue et al. | Oct 2014 | B2 |
8866963 | Grunow et al. | Oct 2014 | B2 |
8868421 | Braho et al. | Oct 2014 | B2 |
8868519 | Maloy et al. | Oct 2014 | B2 |
8868802 | Barten | Oct 2014 | B2 |
8868803 | Caballero | Oct 2014 | B2 |
8870074 | Gannon | Oct 2014 | B1 |
8879639 | Sauerwein, Jr. | Nov 2014 | B2 |
8880426 | Smith | Nov 2014 | B2 |
8881983 | Havens et al. | Nov 2014 | B2 |
8881987 | Wang | Nov 2014 | B2 |
8903172 | Smith | Dec 2014 | B2 |
8908995 | Benos et al. | Dec 2014 | B2 |
8910870 | Li et al. | Dec 2014 | B2 |
8910875 | Ren et al. | Dec 2014 | B2 |
8914290 | Hendrickson et al. | Dec 2014 | B2 |
8914788 | Pettinelli et al. | Dec 2014 | B2 |
8915439 | Feng et al. | Dec 2014 | B2 |
8915444 | Havens et al. | Dec 2014 | B2 |
8916789 | Woodburn | Dec 2014 | B2 |
8918250 | Hollifield | Dec 2014 | B2 |
8918564 | Caballero | Dec 2014 | B2 |
8925818 | Kosecki et al. | Jan 2015 | B2 |
8939374 | Jovanovski et al. | Jan 2015 | B2 |
8942480 | Ellis | Jan 2015 | B2 |
8944313 | Williams et al. | Feb 2015 | B2 |
8944327 | Meier et al. | Feb 2015 | B2 |
8944332 | Harding et al. | Feb 2015 | B2 |
8950678 | Germaine et al. | Feb 2015 | B2 |
D723560 | Zhou et al. | Mar 2015 | S |
8967468 | Gomez et al. | Mar 2015 | B2 |
8971346 | Sevier | Mar 2015 | B2 |
8976030 | Cunningham et al. | Mar 2015 | B2 |
8976368 | El et al. | Mar 2015 | B2 |
8978981 | Guan | Mar 2015 | B2 |
8978983 | Bremer et al. | Mar 2015 | B2 |
8978984 | Hennick et al. | Mar 2015 | B2 |
8985456 | Zhu et al. | Mar 2015 | B2 |
8985457 | Soule et al. | Mar 2015 | B2 |
8985459 | Kearney et al. | Mar 2015 | B2 |
8985461 | Gelay et al. | Mar 2015 | B2 |
8988578 | Showering | Mar 2015 | B2 |
8988590 | Gillet et al. | Mar 2015 | B2 |
8991704 | Hopper et al. | Mar 2015 | B2 |
8996194 | Davis et al. | Mar 2015 | B2 |
8996384 | Funyak et al. | Mar 2015 | B2 |
8998091 | Edmonds et al. | Apr 2015 | B2 |
9002641 | Showering | Apr 2015 | B2 |
9007368 | Laffargue et al. | Apr 2015 | B2 |
9010641 | Qu et al. | Apr 2015 | B2 |
9015513 | Murawski et al. | Apr 2015 | B2 |
9016576 | Brady et al. | Apr 2015 | B2 |
D730357 | Fitch et al. | May 2015 | S |
9022288 | Nahill et al. | May 2015 | B2 |
9030964 | Essinger et al. | May 2015 | B2 |
9033240 | Smith et al. | May 2015 | B2 |
9033242 | Gillet et al. | May 2015 | B2 |
9036054 | Koziol et al. | May 2015 | B2 |
9037344 | Chamberlin | May 2015 | B2 |
9038911 | Xian et al. | May 2015 | B2 |
9038915 | Smith | May 2015 | B2 |
D730901 | Oberpriller et al. | Jun 2015 | S |
D730902 | Fitch et al. | Jun 2015 | S |
D733112 | Chaney et al. | Jun 2015 | S |
9047098 | Barten | Jun 2015 | B2 |
9047359 | Caballero et al. | Jun 2015 | B2 |
9047420 | Caballero | Jun 2015 | B2 |
9047525 | Barber et al. | Jun 2015 | B2 |
9047531 | Showering et al. | Jun 2015 | B2 |
9049640 | Wang et al. | Jun 2015 | B2 |
9053055 | Caballero | Jun 2015 | B2 |
9053378 | Hou et al. | Jun 2015 | B1 |
9053380 | Xian et al. | Jun 2015 | B2 |
9057641 | Amundsen et al. | Jun 2015 | B2 |
9058526 | Powilleit | Jun 2015 | B2 |
9064165 | Havens et al. | Jun 2015 | B2 |
9064167 | Xian et al. | Jun 2015 | B2 |
9064168 | Todeschini et al. | Jun 2015 | B2 |
9064254 | Todeschini et al. | Jun 2015 | B2 |
9066032 | Wang | Jun 2015 | B2 |
9070032 | Corcoran | Jun 2015 | B2 |
D734339 | Zhou et al. | Jul 2015 | S |
D734751 | Oberpriller et al. | Jul 2015 | S |
9082023 | Feng et al. | Jul 2015 | B2 |
9224022 | Ackley et al. | Dec 2015 | B2 |
9224027 | Van et al. | Dec 2015 | B2 |
D747321 | London et al. | Jan 2016 | S |
9230140 | Ackley | Jan 2016 | B1 |
9236050 | Digregorio | Jan 2016 | B2 |
9250712 | Todeschini | Feb 2016 | B1 |
9258033 | Showering | Feb 2016 | B2 |
9262633 | Todeschini et al. | Feb 2016 | B1 |
9310609 | Rueblinger et al. | Apr 2016 | B2 |
D757009 | Oberpriller et al. | May 2016 | S |
9342724 | McCloskey et al. | May 2016 | B2 |
9375945 | Bowles | Jun 2016 | B1 |
D760719 | Zhou et al. | Jul 2016 | S |
9390596 | Todeschini | Jul 2016 | B1 |
D762604 | Fitch et al. | Aug 2016 | S |
D762647 | Fitch et al. | Aug 2016 | S |
9412242 | Van et al. | Aug 2016 | B2 |
D766244 | Zhou et al. | Sep 2016 | S |
9443123 | Hejl | Sep 2016 | B2 |
9443222 | Singel et al. | Sep 2016 | B2 |
9478113 | Xie et al. | Oct 2016 | B2 |
10085101 | Hardek | Sep 2018 | B2 |
20030067585 | Miller et al. | Apr 2003 | A1 |
20030068057 | Miller et al. | Apr 2003 | A1 |
20060069557 | Barker et al. | Mar 2006 | A1 |
20070038442 | Visser et al. | Feb 2007 | A1 |
20070063048 | Havens et al. | Mar 2007 | A1 |
20080304360 | Mozer | Dec 2008 | A1 |
20090134221 | Zhu et al. | May 2009 | A1 |
20100177076 | Essinger et al. | Jul 2010 | A1 |
20100177080 | Essinger et al. | Jul 2010 | A1 |
20100177707 | Essinger et al. | Jul 2010 | A1 |
20100177749 | Essinger et al. | Jul 2010 | A1 |
20110141925 | Velenko et al. | Jun 2011 | A1 |
20110169999 | Grunow et al. | Jul 2011 | A1 |
20110202554 | Powilleit et al. | Aug 2011 | A1 |
20110255725 | Faltys et al. | Oct 2011 | A1 |
20120111946 | Golant | May 2012 | A1 |
20120168512 | Kotlarsky et al. | Jul 2012 | A1 |
20120193423 | Samek | Aug 2012 | A1 |
20120203647 | Smith | Aug 2012 | A1 |
20120223141 | Good et al. | Sep 2012 | A1 |
20130043312 | Van Horn | Feb 2013 | A1 |
20130075168 | Amundsen et al. | Mar 2013 | A1 |
20130175341 | Kearney et al. | Jul 2013 | A1 |
20130175343 | Good | Jul 2013 | A1 |
20130257744 | Daghigh et al. | Oct 2013 | A1 |
20130257759 | Daghigh | Oct 2013 | A1 |
20130270346 | Xian et al. | Oct 2013 | A1 |
20130287258 | Kearney | Oct 2013 | A1 |
20130292475 | Kotlarsky et al. | Nov 2013 | A1 |
20130292477 | Hennick et al. | Nov 2013 | A1 |
20130293539 | Hunt et al. | Nov 2013 | A1 |
20130293540 | Laffargue et al. | Nov 2013 | A1 |
20130306728 | Thuries et al. | Nov 2013 | A1 |
20130306731 | Pedrao | Nov 2013 | A1 |
20130307964 | Bremer et al. | Nov 2013 | A1 |
20130308625 | Park et al. | Nov 2013 | A1 |
20130313324 | Koziol et al. | Nov 2013 | A1 |
20130313325 | Wilz et al. | Nov 2013 | A1 |
20130342717 | Havens et al. | Dec 2013 | A1 |
20140001267 | Giordano et al. | Jan 2014 | A1 |
20140002828 | Laffargue et al. | Jan 2014 | A1 |
20140008439 | Wang | Jan 2014 | A1 |
20140025584 | Liu et al. | Jan 2014 | A1 |
20140034734 | Sauerwein, Jr. | Feb 2014 | A1 |
20140036848 | Pease et al. | Feb 2014 | A1 |
20140039693 | Havens et al. | Feb 2014 | A1 |
20140042814 | Kather et al. | Feb 2014 | A1 |
20140049120 | Kohtz et al. | Feb 2014 | A1 |
20140049635 | Laffargue et al. | Feb 2014 | A1 |
20140061306 | Wu et al. | Mar 2014 | A1 |
20140063289 | Hussey et al. | Mar 2014 | A1 |
20140066136 | Sauerwein et al. | Mar 2014 | A1 |
20140067692 | Ye et al. | Mar 2014 | A1 |
20140070005 | Nahill et al. | Mar 2014 | A1 |
20140071840 | Venancio | Mar 2014 | A1 |
20140074746 | Wang | Mar 2014 | A1 |
20140076974 | Havens et al. | Mar 2014 | A1 |
20140078341 | Havens et al. | Mar 2014 | A1 |
20140078342 | Li et al. | Mar 2014 | A1 |
20140078345 | Showering | Mar 2014 | A1 |
20140098792 | Wang et al. | Apr 2014 | A1 |
20140100774 | Showering | Apr 2014 | A1 |
20140100813 | Showering | Apr 2014 | A1 |
20140103115 | Meier et al. | Apr 2014 | A1 |
20140104413 | McCloskey et al. | Apr 2014 | A1 |
20140104414 | McCloskey et al. | Apr 2014 | A1 |
20140104416 | Giordano et al. | Apr 2014 | A1 |
20140104451 | Todeschini et al. | Apr 2014 | A1 |
20140106594 | Skvoretz | Apr 2014 | A1 |
20140106725 | Sauerwein, Jr. | Apr 2014 | A1 |
20140108010 | Maltseff et al. | Apr 2014 | A1 |
20140108402 | Gomez et al. | Apr 2014 | A1 |
20140108682 | Caballero | Apr 2014 | A1 |
20140110485 | Toa et al. | Apr 2014 | A1 |
20140114530 | Fitch et al. | Apr 2014 | A1 |
20140124577 | Wang et al. | May 2014 | A1 |
20140124579 | Ding | May 2014 | A1 |
20140125842 | Winegar | May 2014 | A1 |
20140125853 | Wang | May 2014 | A1 |
20140125999 | Longacre et al. | May 2014 | A1 |
20140129378 | Richardson | May 2014 | A1 |
20140131438 | Kearney | May 2014 | A1 |
20140131441 | Nahill et al. | May 2014 | A1 |
20140131443 | Smith | May 2014 | A1 |
20140131444 | Wang | May 2014 | A1 |
20140131445 | Ding et al. | May 2014 | A1 |
20140131448 | Xian et al. | May 2014 | A1 |
20140133379 | Wang et al. | May 2014 | A1 |
20140136208 | Maltseff et al. | May 2014 | A1 |
20140140585 | Wang | May 2014 | A1 |
20140151453 | Meier et al. | Jun 2014 | A1 |
20140152882 | Samek et al. | Jun 2014 | A1 |
20140158770 | Sevier et al. | Jun 2014 | A1 |
20140159869 | Zumsteg et al. | Jun 2014 | A1 |
20140166755 | Liu et al. | Jun 2014 | A1 |
20140166757 | Smith | Jun 2014 | A1 |
20140166759 | Liu et al. | Jun 2014 | A1 |
20140168787 | Wang et al. | Jun 2014 | A1 |
20140175165 | Havens et al. | Jun 2014 | A1 |
20140175172 | Jovanovski et al. | Jun 2014 | A1 |
20140191644 | Chaney | Jul 2014 | A1 |
20140191913 | Ge et al. | Jul 2014 | A1 |
20140197238 | Liu et al. | Jul 2014 | A1 |
20140197239 | Havens et al. | Jul 2014 | A1 |
20140197304 | Feng et al. | Jul 2014 | A1 |
20140203087 | Smith et al. | Jul 2014 | A1 |
20140204268 | Grunow et al. | Jul 2014 | A1 |
20140214631 | Hansen | Jul 2014 | A1 |
20140217166 | Berthiaume et al. | Aug 2014 | A1 |
20140217180 | Liu | Aug 2014 | A1 |
20140231500 | Ehrhart et al. | Aug 2014 | A1 |
20140232930 | Anderson | Aug 2014 | A1 |
20140247315 | Marty et al. | Sep 2014 | A1 |
20140263493 | Amurgis et al. | Sep 2014 | A1 |
20140263645 | Smith et al. | Sep 2014 | A1 |
20140270196 | Braho et al. | Sep 2014 | A1 |
20140270229 | Braho | Sep 2014 | A1 |
20140278387 | Digregorio | Sep 2014 | A1 |
20140282210 | Bianconi | Sep 2014 | A1 |
20140284384 | Lu et al. | Sep 2014 | A1 |
20140288933 | Braho et al. | Sep 2014 | A1 |
20140297058 | Barker et al. | Oct 2014 | A1 |
20140299665 | Barber et al. | Oct 2014 | A1 |
20140312121 | Lu et al. | Oct 2014 | A1 |
20140319220 | Coyle | Oct 2014 | A1 |
20140319221 | Oberpriller et al. | Oct 2014 | A1 |
20140326787 | Barten | Nov 2014 | A1 |
20140332590 | Wang et al. | Nov 2014 | A1 |
20140344943 | Todeschini et al. | Nov 2014 | A1 |
20140346233 | Liu et al. | Nov 2014 | A1 |
20140351317 | Smith et al. | Nov 2014 | A1 |
20140353373 | Van et al. | Dec 2014 | A1 |
20140361073 | Qu et al. | Dec 2014 | A1 |
20140361082 | Xian et al. | Dec 2014 | A1 |
20140362184 | Jovanovski et al. | Dec 2014 | A1 |
20140363015 | Braho | Dec 2014 | A1 |
20140369511 | Sheerin et al. | Dec 2014 | A1 |
20140374483 | Lu | Dec 2014 | A1 |
20140374485 | Xian et al. | Dec 2014 | A1 |
20150001301 | Ouyang | Jan 2015 | A1 |
20150001304 | Todeschini | Jan 2015 | A1 |
20150003673 | Fletcher | Jan 2015 | A1 |
20150009338 | Laffargue et al. | Jan 2015 | A1 |
20150009610 | London et al. | Jan 2015 | A1 |
20150014416 | Kotlarsky et al. | Jan 2015 | A1 |
20150021397 | Rueblinger et al. | Jan 2015 | A1 |
20150028102 | Ren et al. | Jan 2015 | A1 |
20150028103 | Jiang | Jan 2015 | A1 |
20150028104 | Ma et al. | Jan 2015 | A1 |
20150029002 | Yeakley et al. | Jan 2015 | A1 |
20150032709 | Maloy et al. | Jan 2015 | A1 |
20150039309 | Braho et al. | Feb 2015 | A1 |
20150040378 | Saber et al. | Feb 2015 | A1 |
20150048168 | Fritz et al. | Feb 2015 | A1 |
20150049347 | Laffargue et al. | Feb 2015 | A1 |
20150051992 | Smith | Feb 2015 | A1 |
20150053766 | Havens et al. | Feb 2015 | A1 |
20150053768 | Wang et al. | Feb 2015 | A1 |
20150053769 | Thuries et al. | Feb 2015 | A1 |
20150062366 | Liu et al. | Mar 2015 | A1 |
20150063215 | Wang | Mar 2015 | A1 |
20150063676 | Lloyd et al. | Mar 2015 | A1 |
20150069130 | Gannon | Mar 2015 | A1 |
20150071819 | Todeschini | Mar 2015 | A1 |
20150083800 | Li et al. | Mar 2015 | A1 |
20150086114 | Todeschini | Mar 2015 | A1 |
20150088522 | Hendrickson et al. | Mar 2015 | A1 |
20150096872 | Woodburn | Apr 2015 | A1 |
20150099557 | Pettinelli et al. | Apr 2015 | A1 |
20150100196 | Hollifield | Apr 2015 | A1 |
20150102109 | Huck | Apr 2015 | A1 |
20150115035 | Meier et al. | Apr 2015 | A1 |
20150127791 | Kosecki et al. | May 2015 | A1 |
20150128116 | Chen et al. | May 2015 | A1 |
20150129659 | Feng et al. | May 2015 | A1 |
20150133047 | Smith et al. | May 2015 | A1 |
20150134470 | Hejl et al. | May 2015 | A1 |
20150136851 | Harding et al. | May 2015 | A1 |
20150136854 | Lu et al. | May 2015 | A1 |
20150142492 | Kumar | May 2015 | A1 |
20150144692 | Hejl | May 2015 | A1 |
20150144698 | Teng et al. | May 2015 | A1 |
20150144701 | Xian et al. | May 2015 | A1 |
20150149946 | Benos et al. | May 2015 | A1 |
20150161429 | Xian | Jun 2015 | A1 |
20150169925 | Chen et al. | Jun 2015 | A1 |
20150169929 | Williams et al. | Jun 2015 | A1 |
20150186703 | Chen et al. | Jul 2015 | A1 |
20150193644 | Kearney et al. | Jul 2015 | A1 |
20150193645 | Colavito et al. | Jul 2015 | A1 |
20150199957 | Funyak et al. | Jul 2015 | A1 |
20150204671 | Showering | Jul 2015 | A1 |
20150210199 | Payne | Jul 2015 | A1 |
20150220753 | Zhu et al. | Aug 2015 | A1 |
20150254485 | Feng et al. | Sep 2015 | A1 |
20150327012 | Bian et al. | Nov 2015 | A1 |
20160014251 | Hejl | Jan 2016 | A1 |
20160040982 | Li et al. | Feb 2016 | A1 |
20160042241 | Todeschini | Feb 2016 | A1 |
20160057230 | Todeschini et al. | Feb 2016 | A1 |
20160109219 | Ackley et al. | Apr 2016 | A1 |
20160109220 | Laffargue et al. | Apr 2016 | A1 |
20160109224 | Thuries et al. | Apr 2016 | A1 |
20160112631 | Ackley et al. | Apr 2016 | A1 |
20160112643 | Laffargue et al. | Apr 2016 | A1 |
20160124516 | Schoon et al. | May 2016 | A1 |
20160125217 | Todeschini | May 2016 | A1 |
20160125342 | Miller et al. | May 2016 | A1 |
20160125873 | Braho et al. | May 2016 | A1 |
20160133253 | Braho et al. | May 2016 | A1 |
20160171720 | Todeschini | Jun 2016 | A1 |
20160178479 | Goldsmith | Jun 2016 | A1 |
20160180678 | Ackley et al. | Jun 2016 | A1 |
20160189087 | Morton et al. | Jun 2016 | A1 |
20160189716 | Lindahl | Jun 2016 | A1 |
20160227912 | Oberpriller et al. | Aug 2016 | A1 |
20160232891 | Pecorari | Aug 2016 | A1 |
20160292477 | Bidwell | Oct 2016 | A1 |
20160294779 | Yeakley et al. | Oct 2016 | A1 |
20160306769 | Kohtz et al. | Oct 2016 | A1 |
20160314276 | Wilz et al. | Oct 2016 | A1 |
20160314294 | Kubler et al. | Oct 2016 | A1 |
Number | Date | Country |
---|---|---|
2013163789 | Nov 2013 | WO |
2013173985 | Nov 2013 | WO |
2014019130 | Feb 2014 | WO |
2014110495 | Jul 2014 | WO |
Entry |
---|
U.S. Appl. No. 13/367,978, filed Feb. 7, 2012 (Feng et al.); now abandoned. |
U.S. Appl. No. 29/530,600 for Cyclone filed Jun. 18, 2015 (Vargo et al); 16 pages. |
U.S. Appl. No. 29/529,441 for Indicia Reading Device filed Jun. 8, 2015 (Zhou et al.); 14 pages. |
U.S. Appl. No. 29/528,890 for Mobile Computer Housing filed Jun. 2, 2015 (Fitch et al.); 61 pages. |
U.S. Appl. No. 29/526,918 for Charging Base filed May 14, 2015 (Fitch et al.); 10 pages. |
U.S. Appl. No. 29/525,068 for Tablet Computer With Removable Scanning Device filed Apr. 27, 2015 (Schulte et al.); 19 pages. |
U.S. Appl. No. 29/523,098 for Handle for a Tablet Computer filed Apr. 7, 2015 (Bidwell et al.); 17 pages. |
U.S. Appl. No. 29/516,892 for Table Computer filed Feb. 6, 2015 (Bidwell et al.); 13 pages. |
U.S. Appl. No. 29/468,118 for an Electronic Device Case, filed Sep. 26, 2013 (Oberpriller et al.); 44 pages. |
U.S. Appl. No. 14/446,391 for Multifunction Point of Sale Apparatus With Optical Signature Capture filed Jul. 30, 2014 (Good et al.); 37 pages; now abandoned. |
U.S. Appl. No. 14/283,282 for Terminal Having Illumination and Focus Control filed May 21, 2014 (Liu et al.); 31 pages; now abandoned. |
U.S. Appl. No. 14/277,337 for Multipurpose Optical Reader, filed May 14, 2014 (Jovanovski et al.); 59 pages; now abandoned. |
U.S. Appl. No. 14/702,979 for Tracking Battery Conditions filed May 4, 2015 (Young et al.); 70 pages. |
U.S. Appl. No. 14/740,320 for Tactile Switch for a Mobile Electronic Device filed Jun. 16, 2015 (Bamdringa); 38 pages. |
U.S. Appl. No. 14/702,110 for System and Method for Regulating Barcode Data Injection Into a Running Application on a Smart Device filed May 1, 2015 (Todeschini et al.); 38 pages. |
U.S. Appl. No. 14/747,197 for Optical Pattern Projector filed Jun. 23, 2015 (Thuries et al.); 33 pages. |
U.S. Appl. No. 14/705,407 for Method and System to Protect Software-Based Network-Connected Devices From Advanced Persistent Threat filed May 6, 2015 (Hussey et al.); 42 pages. |
U.S. Appl. No. 14/704,050 for Intermediate Linear Positioning filed May 5, 2015 (Charpentier et al.); 60 pages. |
U.S. Appl. No. 14/735,717 for Indicia-Reading Systems Having an Interface With a User's Nervous System filed Jun. 10, 2015 (Todeschini); 39 pages. |
U.S. Appl. No. 14/705,012 for Hands-Free Human Machine Interface Responsive to a Driver of a Vehicle filed May 6, 2015 (Fitch et al.); 44 pages. |
U.S. Appl. No. 14/715,916 for Evaluating Image Values filed May 19, 2015 (Ackley); 60 pages. |
U.S. Appl. No. 14/747,490 for Dual-Projector Three-Dimensional Scanner filed Jun. 23, 2015 (Jovanovski et al.); 40 pages. |
U.S. Appl. No. 14/740,373 for Calibrating a Volume Dimensioner filed Jun. 16, 2015 (Ackley et al.); 63 pages. |
U.S. Appl. No. 14/715,672 for Augmented Reality Enabled Hazard Display filed May 19, 2015 (Venkatesha et al.); 35 pages. |
U.S. Appl. No. 14/707,123 for Application Independent DEX/UCS Interface filed May 8, 2015 (Pape); 47 pages. |
Number | Date | Country |
---|---|---|
20180367930 A1 | Dec 2018 | US |

Relation | Number | Date | Country |
---|---|---|---|
Parent | 15209145 | Jul 2016 | US |
Child | 16110602 | | US |