Probability-based approach to recognition of user-entered data

Information

  • Patent Grant
  • Patent Number
    9,268,764
  • Date Filed
    Monday, November 18, 2013
  • Date Issued
    Tuesday, February 23, 2016
Abstract
A method for entering keys on a small keypad is provided. The method comprises the steps of: providing at least a part of a keyboard having a plurality of keys; and predetermining a first probability of a user striking a key among the plurality of keys. The method further uses a dictionary of selected words associated with the keypad and/or a user.
Description

This application claims the benefit of U.S. patent application Ser. No. 12/186,425 filed Aug. 5, 2008, now U.S. Pat. No. 8,589,149, entitled “A PROBABILITY-BASED APPROACH TO RECOGNITION OF USER-ENTERED DATA,” which is incorporated herein by reference in its entirety.


TECHNICAL FIELD

This invention relates to an apparatus and methods for data entry; more specifically, to an apparatus and methods for a probability-based approach to data entry.


BACKGROUND

Data entry using a keyboard or a keypad is known. However, a user may mistakenly enter an unintended key within the neighborhood of the intended key. Therefore, it is desirable to provide a probability-based scheme to determine the intended input of the user based upon the sequence of entered keys.


SUMMARY

There is provided a method comprising the steps of: providing at least a part of a keyboard having a plurality of keys; and associating a probability distribution to each key on the keyboard.


There is provided a method for entering data by pressing keys on a keypad, be it a keypad with physical keys or an arrangement of domains on a touch screen, comprising the steps of: providing at least a part of a keyboard having a plurality of keys; and predetermining probabilities of the user striking a key among the plurality of keys, given the intended key.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views and which together with the detailed description below are incorporated in and form part of the specification, serve to further illustrate various embodiments and to explain various principles and advantages all in accordance with the present invention.



FIG. 1 illustrates an example of a continuous-probability-density-based key entry scheme on a portion of a first keyboard.



FIG. 1A illustrates a discrete probability density based upon FIG. 1.



FIG. 2 illustrates a second keyboard layout of the present invention.



FIG. 2A illustrates a probability distribution associated to key 1 of FIG. 2.



FIG. 2B illustrates a probability distribution associated to key 2 of FIG. 2.



FIG. 2C illustrates a probability distribution associated to key 5 of FIG. 2.



FIG. 3 is a flowchart of the present invention.





Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. In addition, the figures represent just one possible example of the method outlined in the sequel.


DETAILED DESCRIPTION

Before describing in detail embodiments that are in accordance with the present invention, it should be observed that the embodiments reside primarily in combinations of method steps and apparatus components related to discerning and/or using probability based method or apparatus to process user-entered data. Accordingly, the apparatus components and method steps have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.


In this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.


The purpose of the present invention is to describe a method and apparatus for discerning user input on portable keyboards in devices such as mobile computers and smartphones, where it is assumed that the means of input (keyboard or touch screen) is such that mistakes sometimes occur (e.g. individual keys on the keyboard are smaller than the human finger, etc.). Listed infra are a few examples. However, the present invention is contemplated for a variety of data entry scenarios including any sized or shaped key pads or key boards, as well as any suitable data entry means.


The present patent application describes two examples for expository purposes: first, typing text on a QWERTY keyboard; and second, entering UPC codes of items on a numerical keypad. We will refer to these examples as “example 1” and “example 2” respectively. It should be understood that the present invention applies to many scenarios beyond these two. The general setup is described below.


Definition of terms:


1. The term “keyboard” comprises any means of user input. The keyboard comprises keys, as previously indicated; the keys may be physical keys or may simply be domains on a touch screen. Lowercase Greek letters will be used to denote a generic key (for example α, β, etc.), while capital letters such as K will be used to denote the set of all keys.


2. The term “word” will be used to indicate an item of intended user input. If the user is entering text, this would be a word in the appropriate language. If, for example, the user is checking inventory by inputting UPC codes of items in a warehouse environment, a word would be the UPC code of an item in inventory. It is assumed that the user intends to enter a word using the keyboard, and that mistakes sometimes occur.


3. The term “dictionary” will be used to indicate a pre-determined set of words. In the case of text input, this will be an actual dictionary or lexicon, whereas in the case of numerical code input it would be a list of all items, for example, in inventory and their UPC codes.


4. The term “string” will be used in reference to the actual user input. This may or may not be a word, since part of the assumption is that the user is prone to making mistakes. However, it is assumed that each string is meant to be a word from the dictionary.


The proposed general setup is as follows. A keyboard is provided, as is a dictionary of words. It will be assumed that the user intends to enter a word from the subject dictionary using the keyboard. Depending on the arrangement and form of the provided keyboard, there will be a number associated to each pair of keys (α, β) indicating the probability that key β will be pressed when key α is intended. Thus, given a user-entered string, one is able to associate to every dictionary word a number indicating the likelihood that the entered string would occur given that the dictionary word was intended (see further description infra). This works by viewing each keystroke as an independent event, with probabilities given as described above. Combined with optional additional probabilities indicating the likelihood each word was intended, one gets a probability associated to each dictionary word indicating the likelihood it was intended by the user. These scores or results are then used to rank dictionary words according to the most likely outcome (see further description infra).
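
As an illustration of this setup, the following Python sketch scores dictionary words against an entered string. It is a minimal sketch, not the patent's implementation: the function names, the default-to-zero handling of unlisted key pairs, and the uniform fallback prior are our assumptions.

    from typing import Dict, Iterable, Tuple

    def word_likelihood(string: str, word: str,
                        p: Dict[Tuple[str, str], float]) -> float:
        """Probability of observing `string` when `word` was intended,
        treating each keystroke as an independent event."""
        score = 1.0
        for intended, struck in zip(word, string):  # first n letters only
            # p[(alpha, beta)]: probability of striking beta when alpha
            # is intended; pairs not listed are taken to be impossible.
            score *= p.get((intended, struck), 0.0)
        return score

    def rank(string: str, dictionary: Iterable[str],
             p: Dict[Tuple[str, str], float], priors=None):
        """Rank dictionary words by the likelihood they were intended."""
        words = list(dictionary)
        priors = priors or {w: 1.0 for w in words}  # uniform when absent
        scores = {w: word_likelihood(string, w, p) * priors[w]
                  for w in words}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)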


Referring to FIG. 1, a part of a QWERTY keyboard 100 is shown, in reference to example 1. Assuming a typical user (not shown) intends to press or hit the “G” key, the user would most likely score a direct hit on the “G” key. However, the user may hit other keys in close proximity to the “G” key, albeit with lower probability. This scenario occurs most often when the keyboard is too small to accommodate the user's entering means, such as fingers. Alternatively, the user may simply be careless or may have a physical limitation preventing accurate key entry. As can be seen, FIG. 1 is meant to give a representation of how a user might miss the “G” key; it is a representation of a continuous probability density centered on the “G” key.


Referring to FIG. 1A, a discrete probability density based upon FIG. 1 is shown. Since pressing a key yields the same input regardless of precisely where the key was struck, such a discrete probability density is more useful. As can be seen, intending to hit the “G” key and actually hitting the “G” key typically has the highest probability. Other keys proximate to the “G” key have relatively low probabilities.


It should be noted that such probability densities may in general be arbitrary. We have chosen to represent the specific example of typing on a QWERTY keyboard, where we have chosen the probability densities to be roughly Gaussian. In practice, these probability densities can be preset or determined by experimental testing. The densities are directly related to the “probability matrix” described below.
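
For instance, a roughly Gaussian miss pattern like the one depicted in FIG. 1 can be discretized into per-key probabilities. The sketch below does this under assumed inputs: the key-center coordinates and the spread sigma are hypothetical placeholders, not measured values.

    import math

    # Hypothetical key-center coordinates around "g" on a QWERTY layout.
    KEY_CENTERS = {"g": (0.0, 0.0), "f": (-1.0, 0.0), "h": (1.0, 0.0),
                   "t": (-0.5, 1.0), "y": (0.5, 1.0),
                   "v": (-0.5, -1.0), "b": (0.5, -1.0)}

    def discrete_density(intended: str, sigma: float = 0.6) -> dict:
        """Weight each key by a Gaussian centered on the intended key,
        then normalize so the probabilities sum to 1."""
        cx, cy = KEY_CENTERS[intended]
        w = {k: math.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                         / (2 * sigma ** 2))
             for k, (x, y) in KEY_CENTERS.items()}
        total = sum(w.values())
        return {k: v / total for k, v in w.items()}

    print(discrete_density("g"))  # "g" receives the most mass, as in FIG. 1A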



FIGS. 1-1A generally assume that a user is entering text on a keyboard (physical or touch screen, QWERTY or otherwise). The assumption is that the user is entering a word in a predetermined dictionary. The algorithm, or a method suitable for computer implementation, will attempt to discern the word which the user intends to enter, while allowing for the user to make typing errors and correcting those errors based upon probability (see infra). The primary assumption is that the user does not make ‘large’ mistakes, but may make many ‘small’ mistakes. This will be explained precisely infra.


Referring to FIG. 2, a second keyboard layout 200 of the present invention is shown in relation to example 2. Keyboard layout 200 has nine keys, numbered one through nine (1-9). Keyboard layout 200 forms a part of a typical numerical keypad.


Referring to FIGS. 2A-2C, a sequence of three scenarios of probability densities of keys on the keyboard layout 200 of FIG. 2 is shown. Note that the number associated to each key in FIGS. 2A-2C is analogous to the height of the density in FIG. 1A.


In FIG. 2A, a first scenario 202 in which a user intends to strike or press the number “1” key is shown. According to this specific probability distribution, the probability of the user hitting the number “1” key is 0.5. Similarly, the probabilities of the user hitting the number “2” key and the number “4” key are each 0.2. The probability of the user hitting the number “5” key is 0.1. Note that it is highly unlikely that the user will hit keys “3”, “6”, “7”, “8”, and “9”; therefore, the probability of hitting these keys is taken to be zero.


In FIG. 2B, a second scenario 204 in which a user intends to strike or press the number “2” key is shown. According to this specific probability distribution, the probability of the user hitting the number “2” key is 0.6. The probabilities of the user hitting the number “1”, “3”, and “5” keys are each 0.1, and the probabilities of hitting the number “4” and “6” keys are each 0.05. Note that it is highly unlikely that the user will hit keys “7”, “8”, and “9”; therefore, the probability of hitting these keys is taken to be zero.


In FIG. 2C, a third scenario 206 in which a user intends to strike or press the number “5” key is shown. According to this specific probability distribution, the probability of the user hitting the number “5” key is 0.6, and the probabilities of hitting the number “2”, “4”, “6”, and “8” keys are each 0.1. Note that it is highly unlikely that the user will hit keys “1”, “3”, “7”, and “9”; therefore, the probability of hitting these keys is taken to be zero.


As can be seen, FIGS. 2-2C follow example 2, in which the user is entering numerical codes. The numerical codes include codes which correspond to inventory or products (UPC codes, for example). Here the ‘keyboard’ might be a small numerical keypad, physical or touch screen. This scenario is used to produce the examples infra.


Probability Matrix


The qualities of the keyboard (hardware attributes, shape, number of keys, etc.) determine how likely the user is to strike keys other than his intended key. Further, entrenched user typing behaviors sometimes affect these probabilities as well. For each pair of keys (α, β) we give a probability (a number ranging from 0 to 1) that the user strikes β when he intends to strike α. We will call this probability P(α, β). Notice that since it is assumed that the user will press some key, we have the relationship

Σβ∈K P(α, β) = 1 for all α ∈ K  (1)


To account for the scenario when the user misses the keyboard entirely, we can consider the complement of the keyboard as another key in itself. This is particularly applicable to the touch screen scenario.


Once an order is assigned to the keys, this set of probabilities can be written as an n×n matrix, where n denotes the number of keys on our keyboard. We let P={pij}, where pij is the probability that the user presses the jth key when he intends on pressing the ith key. P will be referred to as the “probability matrix”. In terms of this matrix, Eq. 1 indicates that the entries in any row sum to 1.
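
In code, the probability matrix is simply a row-stochastic n×n array. The sketch below builds one from raw miss counts and checks the row-sum property of Eq. 1; the 3-key toy counts are invented for illustration and are not values from the patent.

    def to_probability_matrix(counts):
        """Normalize a matrix of raw counts so each row sums to 1,
        giving pij = P(press key j | key i intended), as in Eq. 1."""
        return [[c / sum(row) for c in row] for row in counts]

    # Toy 3-key example: diagonal dominance means the intended key is
    # usually the one actually pressed.
    P = to_probability_matrix([[8, 2, 0],
                               [1, 8, 1],
                               [0, 2, 8]])
    assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)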


Suppose our keyboard consists of 9 numerical keys arranged in the format as shown in FIG. 2. Associated to this keyboard, we have a 9×9 matrix, where the ordering of the keys is given by their numerical order.









P =

  [ 0.5   0.2   0     0.2   0.1   0     0     0     0    ]
  [ 0.1   0.6   0.1   0.05  0.1   0.05  0     0     0    ]
  [ 0     0.2   0.5   0     0.1   0.2   0     0     0    ]
  [ 0.1   0.05  0     0.6   0.1   0     0.1   0.05  0    ]
  [ 0     0.1   0     0.1   0.6   0.1   0     0.1   0    ]
  [ 0     0.05  0.1   0     0.1   0.6   0     0.05  0.1  ]
  [ 0     0     0     0.2   0.1   0     0.5   0.2   0    ]
  [ 0     0     0     0.05  0.1   0.05  0.1   0.6   0.1  ]
  [ 0     0     0     0     0.1   0.2   0     0.2   0.5  ]   (2)

So, this matrix indicates that the user will press the “6” key 10% of the time he intends to press the “5” key (since p56=0.1). Notice the matrix also indicates that the user “will never” miss an intended key by a large amount, i.e., will never strike keys that are not in close proximity to the intended key. For example, since p46=0, it is assumed that the user will never press “6” when “4” is intended. One should compare row 1 of P to FIG. 2A, row 2 to FIG. 2B, and row 5 to FIG. 2C.
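
As a quick verification sketch, the matrix of Eq. 2 can be written out directly and its stated properties checked. Note that code indexing is zero-based, so the p56 of the text is P[4][5] below.

    # The probability matrix of Eq. 2, one row per intended key 1-9.
    P = [
        [0.5, 0.2,  0.0, 0.2,  0.1, 0.0,  0.0, 0.0,  0.0],
        [0.1, 0.6,  0.1, 0.05, 0.1, 0.05, 0.0, 0.0,  0.0],
        [0.0, 0.2,  0.5, 0.0,  0.1, 0.2,  0.0, 0.0,  0.0],
        [0.1, 0.05, 0.0, 0.6,  0.1, 0.0,  0.1, 0.05, 0.0],
        [0.0, 0.1,  0.0, 0.1,  0.6, 0.1,  0.0, 0.1,  0.0],
        [0.0, 0.05, 0.1, 0.0,  0.1, 0.6,  0.0, 0.05, 0.1],
        [0.0, 0.0,  0.0, 0.2,  0.1, 0.0,  0.5, 0.2,  0.0],
        [0.0, 0.0,  0.0, 0.05, 0.1, 0.05, 0.1, 0.6,  0.1],
        [0.0, 0.0,  0.0, 0.0,  0.1, 0.2,  0.0, 0.2,  0.5],
    ]

    assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)  # Eq. 1 holds
    assert P[4][5] == 0.1  # p56: "6" struck 10% of the time "5" is meant
    assert P[3][5] == 0.0  # p46: "6" is never struck when "4" is meant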


The probability matrix (Eq. 2) acts as the model for user input. The more accurate this model, the more effectively our algorithm, or method suitable for computer implementation, will run. Therefore, it is likely that the values of the probability matrix (Eq. 2) associated to a fixed or particular keyboard will be obtained via testing or experiment. It is also possible that the values in the probability matrix (Eq. 2) are user customizable or user specific. It is contemplated that the device of the present invention will initiate a learning phase where the values of the probability matrix are seeded. There may also be stock customizable options (for example, a left-handed user might miss keys differently than a right-handed user).


Comparing to Dictionary Words


The probability matrix (Eq. 2) allows us to associate to every word in our dictionary a probability that the user intended to enter that word given his entered string. This works in the following manner. Suppose the user enters the string “α1α2α3” and we consider the dictionary word “β1β2β3”. We know that if the user intended to type “β1”, he would strike “α1” with probability P(β1, α1). Similarly, if the user intended to type “β2”, he would strike “α2” with probability P(β2, α2). Therefore, we can say that if a user intended to type “β1β2β3”, he would type “α1α2α3” with probability P(β1, α1)·P(β2, α2)·P(β3, α3). In this manner, we associate a number to every dictionary word, based upon the string entered by the user. If the user has entered n letters in the string, only the first n letters of the dictionary words are used.


Note that this number gives the probability that the user would type the string “α1α2α3” if he intended to type the word “β1β2β3”. We would like to know the probability that the user intended to type “β1β2β3” given that he typed “α1α2α3”. A learned reader will recognize this as a statement of conditional probability. We require an additional piece of information, namely a probability associated to each dictionary word indicating the likelihood that the word will be intended. In the text entry of example 1, this could be given by word frequency or by more sophisticated grammatical tools based on sentence context. In the numerical code entry of example 2, this could be the proportion of each particular item in inventory. The absence of such a likelihood associated to each word can be interpreted as assigning equal likelihood to the occurrence of each dictionary word.
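
Stated in code, this is Bayes' rule over a finite dictionary. The normalization step and the example priors below are our additions for illustration, not values from the patent.

    def posterior(likelihoods: dict, priors: dict) -> dict:
        """P(word | string) is proportional to
        P(string | word) * P(word), normalized over the dictionary."""
        joint = {w: lik * priors.get(w, 0.0)
                 for w, lik in likelihoods.items()}
        total = sum(joint.values())
        return {w: v / total for w, v in joint.items()} if total else joint

    # Two words equally likely to produce the string, but one is far
    # more common in inventory, so it wins after weighting.
    print(posterior({"6752": 0.012, "3856": 0.012},
                    {"6752": 0.7, "3856": 0.3}))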


We continue our numerical keypad example 2 as shown in FIG. 2. Here our dictionary is a collection of 4-digit codes which correspond to such things as products in inventory. Suppose the set of these codes is

    • I={1128; 2454; 3856; 9988; 2452; 1324; 6752; 4841}.


The user then enters the string “684”. We then use these three numbers and the values inherent in our probability matrix to associate to each word a probability:












TABLE THREE

  Word    Probability
  1128    p16 · p18 · p24 = 0
  2454    p26 · p48 · p54 = 0.00025
  3856    p36 · p88 · p54 = 0.012
  9988    p96 · p98 · p84 = 0.002
  2452    p26 · p48 · p54 = 0.00025
  1324    p16 · p38 · p24 = 0
  6752    p66 · p78 · p54 = 0.012
  4841    p46 · p88 · p44 = 0
Assuming that all items exist in equal proportion in inventory, one can then say that the user was most likely trying to enter the code “6752” or “3856”, as both have the highest probability among the set. If it were known that there was a higher proportion of item number “6752” in inventory, then “6752” would become a better guess than “3856”.
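
The table above can be reproduced mechanically from Eq. 2. The sketch below stores only the nonzero entries needed for the string “684” (a sparse shortcut we chose for brevity; the full matrix works equally well).

    # Nonzero (intended, struck) entries of Eq. 2 used by string "684".
    P = {(2, 6): 0.05, (4, 8): 0.05, (5, 4): 0.1,
         (3, 6): 0.2,  (8, 8): 0.6,  (9, 6): 0.2,
         (9, 8): 0.2,  (8, 4): 0.05, (6, 6): 0.6,
         (7, 8): 0.2}

    def score(code: str, string: str = "684") -> float:
        """Product of per-digit probabilities over the first len(string)
        digits of the code; missing pairs count as probability 0."""
        s = 1.0
        for intended, struck in zip(code, string):
            s *= P.get((int(intended), int(struck)), 0.0)
        return s

    for code in ["1128", "2454", "3856", "9988",
                 "2452", "1324", "6752", "4841"]:
        print(code, score(code))  # "3856" and "6752" tie at 0.012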


Referring to FIG. 3, a flowchart 300 depicting an example of using the present invention is shown. A part of a keyboard is formed or provided to a user for entering information (Step 302). A probability distribution of a specific group of users regarding the part of the keyboard is determined (Step 304). The probability distribution may be in the form of a probability matrix such as the one shown supra. A dictionary comprising predetermined words is provided (Step 306). To every word in the dictionary is associated a probability and frequency (a likelihood of occurrence) that the user intended to enter that word based upon his entered string (Step 308). In this manner, we associate a number to every dictionary word, based upon the string entered by the user. If the user has entered n letters in the string, only the first n letters of the dictionary words are used. The n letters entered are associated to a set of words in the dictionary, each having a corresponding probability (Step 310). This probability, or the first probability, is then multiplied by a second probability that the word is intended (as described in the above paragraph) (Step 311). Note that omitting this step is tantamount to treating each word as equally likely, which is not desired by the present invention. The words in the set having the highest probability are chosen as the likely word entered by the user (Step 312).
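
Putting the steps of FIG. 3 together, a compact end-to-end sketch might look as follows; again, the function name and data shapes are our assumptions rather than the patent's implementation.

    def most_likely_words(string, dictionary, p, frequency=None):
        """Steps 306-312 of FIG. 3: score every dictionary word against
        the entered string and keep the highest-scoring candidates."""
        frequency = frequency or {w: 1.0 for w in dictionary}  # Step 311
        scores = {}
        for word in dictionary:
            s = frequency[word]                         # second probability
            for intended, struck in zip(word, string):  # first n letters
                s *= p.get((intended, struck), 0.0)     # first probability
            scores[word] = s
        best = max(scores.values(), default=0.0)
        return [w for w, s in scores.items() if s == best and s > 0]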


As can be seen, the present invention describes a method and apparatus for finding the likelihood of words in dictionaries matching the user input. There may be one or many matches, with varying degrees of probability, based on the user input and the quality of the dictionary.


In the foregoing specification, specific embodiments of the present invention have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.

Claims
  • 1. A computer-implemented method for text input, the method comprising: receiving input indicative of a user striking a pressed key represented on an input device; receiving a probability distribution based at least in part on the location on the input device of the pressed key relative to locations of two or more adjacent keys represented on the input device, wherein, for the pressed key and one of the adjacent keys, the probability distribution indicates a probability that the adjacent key was intended when the pressed key was struck, and wherein the probability distribution indicates a higher probability for adjacent keys in the same row as the pressed key than for adjacent keys in one or more other rows; and in response to receiving the input indicative of a user striking the pressed key, using a processor and the probability distribution to select at least one candidate word.
  • 2. The method of claim 1, further comprising using a dictionary to select the at least one candidate word.
  • 3. The method of claim 1, further comprising: using a dictionary to select the at least one candidate word; wherein the input is one of multiple inputs where each of the multiple inputs is translated into a string, and wherein the method further comprises determining a probability of one or more strings corresponding to the string based at least in part on a probability distribution corresponding to the multiple inputs.
  • 4. The method of claim 1, wherein probabilities of the probability distribution below a particular threshold are set to 0, and wherein the one or more other rows consist of one or more rows perceptually lower on the input device than the same row as the pressed key.
  • 5. A computer-readable memory device storing instructions configured to, when executed by a computing device, cause the computing device to perform operations for text input, the operations comprising: receiving input indicative of a user striking a pressed key represented on an input device; receiving a probability distribution based at least in part on the location on the input device of the pressed key relative to locations of two or more adjacent keys represented on the input device, wherein, for the pressed key and one of the adjacent keys, the probability distribution indicates a probability that the adjacent key was intended when the pressed key was struck, and wherein the probability distribution indicates a higher probability for adjacent keys in the same row as the pressed key than for adjacent keys in one or more other rows; and in response to receiving the input indicative of a user striking the pressed key, using a processor and the probability distribution to select at least one candidate word.
  • 6. The computer-readable memory device of claim 5, wherein the operations further comprise using a dictionary of words to select the at least one candidate word.
  • 7. The computer-readable memory device of claim 5, wherein the operations further comprise using a dictionary of words to select the at least one candidate word, wherein the operations further comprise associating multiple words in the dictionary with a corresponding probability, and wherein each corresponding probability is a probability that the word is intended based at least in part on one or more of a word frequency, a grammatical rule, and a proportion of an item in an inventory.
  • 8. The computer-readable memory device of claim 5, wherein the operations further comprise using a dictionary of words to select the at least one candidate word; and wherein the operations further comprise associating a set of n (n being a natural number) letters entered to a set of words in the dictionary.
  • 9. The computer-readable memory device of claim 5, wherein the operations further comprise using a dictionary of words to select the at least one candidate word; wherein the operations further comprise associating a set of n (n being a natural number) letters entered to a set of words in the dictionary; and wherein using the probability distribution to select at least one candidate word comprises selecting words in the set of words having the highest probability.
  • 10. The computer-readable memory device of claim 5, wherein the operations further comprise providing a learning phase where values of a probability matrix are seeded.
  • 11. The computer-readable memory device of claim 5, wherein the operations further comprise selecting one of several pre-determined probability matrices based at least in part on handedness of the user, and wherein at least one of the probability matrices is associated with a left-handed user and at least one other of the probability matrices is associated with a right-handed user.
  • 12. The computer-readable memory device of claim 5, wherein the probability distribution is an N by N matrix where N is a number of keys represented by the input device and wherein a matrix entry at position (I, J) is a probability that the Jth key was intended when the Ith key was pressed.
  • 13. The computer-readable memory device of claim 5, wherein probabilities of the probability distribution below a particular threshold are set to 0, and wherein the one or more other rows consist of one or more rows perceptually lower on the input device than the same row as the pressed key.
  • 14. A text input system comprising: a memory; an input interface configured to receive input indicative of a user striking a pressed key; a probability application module configured to receive a probability distribution, wherein the probability distribution is based at least in part on the location of the pressed key on the input device relative to two or more adjacent keys, wherein, for each pair comprising the pressed key and one of the adjacent keys, the probability distribution indicates a probability that the adjacent key was intended when the pressed key was struck, and wherein the probability distribution indicates a higher probability for adjacent keys in the same row as the pressed key than for adjacent keys in one or more other rows; and a processor configured to, in response to receiving the input indicative of a user striking the pressed key, use the probability distribution to select at least one candidate word.
  • 15. The system of claim 14, wherein the keyboard is a touchscreen keyboard, and wherein the probability distribution comprises an N by N matrix where N is a number of keys represented by the input device and wherein a matrix entry at position (I, J) is a probability that the Jth key was intended when the Ith key was pressed.
  • 16. The system of claim 14, wherein probabilities of the probability distribution below a particular threshold are set to 0, and wherein the one or more other rows consist of one or more rows perceptually lower on the input device than the same row as the pressed key.
  • 17. The system of claim 14, further comprising a probability distribution creation component configured to provide a learning phase where values of one or more probability distributions are seeded.
  • 18. The system of claim 14, wherein the probability distribution is a probability matrix selected, based at least in part on handedness of the user, from one of several pre-determined probability matrices, and wherein at least one of the probability matrices is associated with a left-handed user and at least one other of the probability matrices is associated with a right-handed user.
  • 19. The system of claim 14, wherein the processor is further configured to select the at least one candidate word by associating each of at least two words in a dictionary with a corresponding probability by multiplying multiple probabilities of one or more probability distributions comprising the probability distribution, and wherein each of the multiple probabilities is associated with a particular key in a received sequence of selected keys comprising the pressed key.
  • 20. The system of claim 14, wherein the processor is further configured to select the at least one candidate word by associating each of at least two words in a dictionary with a corresponding probability by multiplying multiple probabilities of one or more probability distributions comprising the probability distribution, wherein each of the multiple probabilities is associated with a particular key in a received sequence of selected keys comprising the pressed key, and wherein associating each of the at least two words in the dictionary with the corresponding probability further comprises adjusting at least one of the corresponding probabilities to reflect a frequency of use of the word.
US Referenced Citations (189)
Number Name Date Kind
4730252 Bradshaw Mar 1988 A
5422656 Allard et al. Jun 1995 A
5475735 Williams et al. Dec 1995 A
5638369 Ayerst et al. Jun 1997 A
5675628 Hokkanen Oct 1997 A
5790798 Beckett, II et al. Aug 1998 A
5831598 Kauffert et al. Nov 1998 A
5845211 Roach, Jr. Dec 1998 A
5963666 Fujisaki Oct 1999 A
6031467 Hymel et al. Feb 2000 A
6151507 Laiho et al. Nov 2000 A
6199045 Giniger et al. Mar 2001 B1
6219047 Bell Apr 2001 B1
6245756 Patchev et al. Jun 2001 B1
6246756 Borland et al. Jun 2001 B1
6301480 Kennedy, III et al. Oct 2001 B1
6368205 Frank Apr 2002 B1
6370399 Phillips Apr 2002 B1
6389278 Singh May 2002 B1
6424945 Sorsa Jul 2002 B1
6430252 Reinwand et al. Aug 2002 B2
6430407 Turtiainen Aug 2002 B1
6466783 Dahm et al. Oct 2002 B2
6496979 Chen et al. Dec 2002 B1
6615038 Moles et al. Sep 2003 B1
6618478 Stuckman et al. Sep 2003 B1
6646570 Yamada et al. Nov 2003 B1
6654594 Hughes et al. Nov 2003 B1
6668169 Burgan et al. Dec 2003 B2
6720864 Wong et al. Apr 2004 B1
6748211 Isaac et al. Jun 2004 B1
6766017 Yang Jul 2004 B1
6792280 Hori et al. Sep 2004 B1
6795703 Takae et al. Sep 2004 B2
6819932 Allison et al. Nov 2004 B2
6909910 Pappalardo et al. Jun 2005 B2
6922721 Minborg et al. Jul 2005 B1
6931258 Jarnstrom et al. Aug 2005 B1
6940844 Purkayastha et al. Sep 2005 B2
6944447 Portman et al. Sep 2005 B2
6954754 Peng Oct 2005 B2
6970698 Majmundar et al. Nov 2005 B2
6978147 Coombes Dec 2005 B2
7024199 Massie et al. Apr 2006 B1
7030863 Longe et al. Apr 2006 B2
7031697 Yang et al. Apr 2006 B2
7080321 Aleksander et al. Jul 2006 B2
7092738 Creamer et al. Aug 2006 B2
7098896 Kushler et al. Aug 2006 B2
7099288 Parker et al. Aug 2006 B1
7117144 Goodman et al. Oct 2006 B2
7129932 Klarlund et al. Oct 2006 B1
7170993 Anderson et al. Jan 2007 B2
7177665 Ishigaki Feb 2007 B2
7190960 Wilson et al. Mar 2007 B2
7194257 House et al. Mar 2007 B2
7209964 Dugan et al. Apr 2007 B2
7218249 Chadha May 2007 B2
7236166 Zinniel et al. Jun 2007 B2
7259751 Hughes et al. Aug 2007 B2
7277088 Robinson et al. Oct 2007 B2
7287026 Oommen Oct 2007 B2
7295836 Yach et al. Nov 2007 B2
7308497 Louviere et al. Dec 2007 B2
7319957 Robinson et al. Jan 2008 B2
7353016 Roundtree et al. Apr 2008 B2
7359706 Zhao Apr 2008 B2
7376414 Engstrom May 2008 B2
7379969 Osborn, Jr. May 2008 B2
7453439 Kushler et al. Nov 2008 B1
7493381 Garg Feb 2009 B2
7539484 Roundtree May 2009 B2
7542029 Kushler Jun 2009 B2
7558965 Wheeler et al. Jul 2009 B2
7602377 Kim Oct 2009 B2
7647075 Tsuda et al. Jan 2010 B2
7647527 Duan et al. Jan 2010 B2
7660870 Vandermeijden et al. Feb 2010 B2
7676221 Roundtree et al. Mar 2010 B2
7689588 Badr et al. Mar 2010 B2
7725072 Bettis et al. May 2010 B2
7735021 Padawer et al. Jun 2010 B2
7756545 Roundtree Jul 2010 B2
7773982 Bishop et al. Aug 2010 B2
7777728 Rainisto Aug 2010 B2
7783729 Macaluso Aug 2010 B1
7809376 Letourneau et al. Oct 2010 B2
7809574 Roth Oct 2010 B2
7810030 Wu et al. Oct 2010 B2
7881703 Roundtree et al. Feb 2011 B2
7899915 Reisman Mar 2011 B2
7912700 Bower et al. Mar 2011 B2
7920132 Longe et al. Apr 2011 B2
7957955 Christie et al. Jun 2011 B2
8036645 Roundtree et al. Oct 2011 B2
8201087 Kay et al. Jun 2012 B2
8285263 Roundtree et al. Oct 2012 B2
8301123 Roundtree et al. Oct 2012 B2
8589149 Cecil et al. Nov 2013 B2
8666728 Rigazio et al. Mar 2014 B2
8934611 Doulton Jan 2015 B2
9043725 Wakefield May 2015 B2
20010046886 Ishigaki Nov 2001 A1
20020065109 Mansikkaniemi et al. May 2002 A1
20020112172 Simmons Aug 2002 A1
20020115476 Padawer et al. Aug 2002 A1
20020123368 Yamadera et al. Sep 2002 A1
20020155476 Pourmand et al. Oct 2002 A1
20030011574 Goodman Jan 2003 A1
20030013483 Ausems et al. Jan 2003 A1
20030039948 Donahue Feb 2003 A1
20030112931 Brown et al. Jun 2003 A1
20030187650 Moore et al. Oct 2003 A1
20030204725 Itoi et al. Oct 2003 A1
20030233461 Mariblanca-Nieves et al. Dec 2003 A1
20040136564 Roeber et al. Jul 2004 A1
20040142720 Smethers Jul 2004 A1
20040171375 Chow-Toun Sep 2004 A1
20040172561 Iga Sep 2004 A1
20040183833 Chua Sep 2004 A1
20040193444 Hufford et al. Sep 2004 A1
20040198316 Johnson Oct 2004 A1
20040242209 Kruis et al. Dec 2004 A1
20050003850 Tsuda et al. Jan 2005 A1
20050108329 Weaver et al. May 2005 A1
20050114115 Karidis et al. May 2005 A1
20050116840 Simelius Jun 2005 A1
20050120345 Carson Jun 2005 A1
20050266798 Moloney et al. Dec 2005 A1
20050289463 Wu et al. Dec 2005 A1
20060003758 Bishop et al. Jan 2006 A1
20060047830 Nair et al. Mar 2006 A1
20060158436 LaPointe et al. Jul 2006 A1
20060168303 Oyama et al. Jul 2006 A1
20060176283 Suraqui Aug 2006 A1
20060229054 Erola et al. Oct 2006 A1
20060245391 Vaidya et al. Nov 2006 A1
20060274869 Morse Dec 2006 A1
20070019563 Ramachandran et al. Jan 2007 A1
20070061243 Ramer et al. Mar 2007 A1
20070064743 Bettis et al. Mar 2007 A1
20070188472 Ghassabian Aug 2007 A1
20070211923 Kuhlman Sep 2007 A1
20070293199 Roundtree et al. Dec 2007 A1
20070293200 Roundtree et al. Dec 2007 A1
20080039056 Mathews et al. Feb 2008 A1
20080119217 Coxhill May 2008 A1
20080133222 Kogan et al. Jun 2008 A1
20080138135 Gutowitz Jun 2008 A1
20080189550 Roundtree Aug 2008 A1
20080189605 Kay Aug 2008 A1
20080194296 Roundtree Aug 2008 A1
20080256447 Roundtree et al. Oct 2008 A1
20080280588 Roundtree et al. Nov 2008 A1
20090008234 Tolbert Jan 2009 A1
20090089666 White et al. Apr 2009 A1
20090109067 Burstrom Apr 2009 A1
20090124271 Roundtree et al. May 2009 A1
20090249198 Davis et al. Oct 2009 A1
20090254912 Roundtree et al. Oct 2009 A1
20090262078 Pizzi Oct 2009 A1
20090328101 Suomela Dec 2009 A1
20100035583 O'Connell Feb 2010 A1
20100056114 Roundtree et al. Mar 2010 A1
20100087175 Roundtree Apr 2010 A1
20100093396 Roundtree Apr 2010 A1
20100112997 Roundtree May 2010 A1
20100134328 Gutowitz Jun 2010 A1
20100144325 Roundtree et al. Jun 2010 A1
20100159902 Roundtree et al. Jun 2010 A1
20100169443 Roundtree et al. Jul 2010 A1
20100225591 Macfarlane Sep 2010 A1
20100279669 Roundtree Nov 2010 A1
20100304704 Najafi Dec 2010 A1
20100321299 Shelley et al. Dec 2010 A1
20110018812 Baird Jan 2011 A1
20110117894 Roundtree et al. May 2011 A1
20110119623 Kim et al. May 2011 A1
20110193797 Unruh Aug 2011 A1
20110248924 Bhattacharjee Oct 2011 A1
20110264442 Huang Oct 2011 A1
20120028620 Roundtree et al. Feb 2012 A1
20120047454 Harte Feb 2012 A1
20120203544 Kushler Aug 2012 A1
20120254744 Kay Oct 2012 A1
20130005312 Roundtree et al. Jan 2013 A1
20130120278 Cantrell May 2013 A1
20140198047 Unruh et al. Jul 2014 A1
20140198048 Unruh et al. Jul 2014 A1
Foreign Referenced Citations (47)
Number Date Country
2454334 Feb 2003 CA
1283341 Feb 2001 CN
2478292 Feb 2002 CN
1361995 Jul 2002 CN
1611087 Apr 2005 CN
1387241 Feb 2004 EP
1538855 Jun 2005 EP
2340344 Feb 2000 GB
2365711 Feb 2002 GB
62072058 Apr 1987 JP
07203536 Aug 1995 JP
08166945 Jun 1996 JP
09-507986 Aug 1997 JP
10084404 Mar 1998 JP
11-195062 Jul 1999 JP
11259199 Sep 1999 JP
2000348022 Dec 2000 JP
2001069204 Mar 2001 JP
2002-135848 May 2002 JP
200305880 Jan 2003 JP
2003067334 Mar 2003 JP
2003108182 Apr 2003 JP
2003-188982 Jul 2003 JP
2003186590 Jul 2003 JP
2003-280803 Oct 2003 JP
2003-303172 Oct 2003 JP
2003-308148 Oct 2003 JP
2003-309880 Oct 2003 JP
2004021580 Jan 2004 JP
2004032056 Jan 2004 JP
2004-048790 Feb 2004 JP
2004364122 Dec 2004 JP
2005110063 Apr 2005 JP
2005167463 Jun 2005 JP
200344337 Jun 2003 KR
WO-9707641 Feb 1997 WO
WO-0070888 Nov 2000 WO
WO-0186472 Nov 2001 WO
WO-2005081852 Sep 2005 WO
WO-2005083996 Sep 2005 WO
WO-2007002499 Jan 2007 WO
WO-2007044972 Apr 2007 WO
WO-2007070837 Jun 2007 WO
WO-2007092908 Aug 2007 WO
WO-2008042989 Apr 2008 WO
WO-2008086320 Jul 2008 WO
WO-2008128119 Oct 2008 WO
Non-Patent Literature Citations (24)
Entry
International Search Report and Written Opinion for International Application No. PCT/US2012/023884, mailing date Sep. 3, 2012, 8 pages.
International Search Report for International Application No. PCT/US2014/011538, mailing date Apr. 16, 2014, 3 pages.
3rd Generation Partnership Project. “Specification of the SIM Application Toolkit for the Subscriber Identity Module—Mobile Equipment (SIM-ME) Interface (Release 1999),” 3GPP Organizational Partners, 2004, 143 pages.
Center for Customer Driven Quality at Purdue University, “It's the Solution, Stupid,” 2004, 2 pages.
European Search Report for European Application No. 05713762.2, dated Jun. 27, 2008, Applicant SNAPin Software Inc., 6 pages.
Exam Report for Canadian Application No. 2,556,773, Mail Date Dec. 5, 2011, 2 pages.
Extended European Search Report for European Application No. 12005720.3, Dated Dec. 20, 2012, 6 pages.
Extended European Search Report for European Application No. 12005720.3, Dated Dec. 12, 2012, 5 pages.
Gartner, “Contact Center Investment Strategy and Leading Edge technologies,” http://www.gartner.com/4_decision_tools/measurement/measure_it_articles/2002_12/contact_center_investment_strategy_jsp, accessed on Jul. 8, 2008, 4 pages.
International Search Report for International Application No. PCT/US06/40398, dated Jul. 15, 2008, Applicant SNAPin Software Inc.
International Search Report for International Application No. PCT/US08/60137, dated Jun. 30, 2008, Applicant SNAPin Software Inc.
International Search Report of International Application No. PCT/US05/05135, dated Oct. 26, 2005, 3 pages.
International Search Report of International Application No. PCT/US05/33973, dated Apr. 19, 2006.
International Search Report of International Application No. PCT/US05/5517, dated Jul. 6, 2005.
International Search Report of International Application No. PCT/US06/24637, dated Aug. 1, 2007.
International Search Report of International Application No. PCT/US07/61806, dated Feb. 13, 2008.
International Search Report of International Application No. PCT/US08/50447, dated Apr. 10, 2008.
Japanese Office Action dated Jan. 23, 2012 for Japanese Patent Application No. 2008-230347, 69 pages.
Japanese Office Action dated Jun. 16, 2008 under Japanese Patent Application No. 2006-554217, 10 pages.
Japanese Office Action dated Oct. 27, 2011 under Japanese Patent Application No. 2008-545964, 39 pages.
Second Office Action for Chinese Patent Application No. 200580011621.9, Mail Date Dec. 11, 2009, 22 pages.
SNAPin Software Inc., “SNAPin White Paper: The Service Experience Opportunity,” <http://www.snapin.com>, 2005, 16 pages.
Supplementary European Search Report for European Patent Application No. 05723446, Mail Date Jan. 8, 2008, 6 pages.
Supplementary European Search Report for European Patent Application No. 06840247.8, Mail Date Feb. 2, 2012, 8 pages.
Related Publications (1)
Number Date Country
20140074458 A1 Mar 2014 US
Continuations (1)
Number Date Country
Parent 12186425 Aug 2008 US
Child 14083296 US