Systems, methods and apparatus for evaluating status of computing device user

Information

  • Patent Grant
  • Patent Number
    11,775,853
  • Date Filed
    Monday, September 23, 2019
  • Date Issued
    Tuesday, October 3, 2023
  • Inventors
  • Original Assignees
    • Nobots LLC (Renton, WA, US)
  • Examiners
    • Lipman; Jacob
  • Agents
    • Lee & Hayes, P.C.
    • Koempel-Thomas; Bea
Abstract
Methods, systems and apparatus for assessing the likely status of an operator of a computing device interacting with a server as a human operator or an autonomic computer application, such as a “bot,” are described herein. By monitoring at least some data, e.g., biometric data, generated at the client computing device, a comparison can be made between the monitored data and model data relating to human interaction with the computing device. The results of the comparison can lead to a value that represents the likelihood that the monitored data results from human interaction.
Description
BACKGROUND

The Internet is a fantastic tool for constructive web sites to gather users for a common purpose; however, the Internet is also a fantastic tool for abuse of these same web sites. People who want to take advantage of websites do so by creating automated programs employing various algorithms and routines (hereinafter “bots”) that create fictitious accounts or access content for a multitude of reasons.


In an effort to block these bots, builders of web sites have created a variety of tests to determine if the user is a bot or if the user is a human. Initial efforts required a user to simply enter an alphanumeric string into an input field. However, as character recognition engines became more available, such “tests” became easily defeated. What was needed was a more robust form of test—one that couldn't be easily defeated.


Carnegie Mellon University coined the term “CAPTCHA” (Completely Automated Public Turing test to tell Computers and Humans Apart) for these types of tests. A common type of CAPTCHA requires that the user type the letters, digits or characters of a distorted image appearing on the screen. The objective is to create an image that a bot cannot easily parse but that is discernable by a human. Such efforts have been successful in preventing non-adaptive software from recognizing the imaged characters, but people intent on abusing these sites have designed ways to circumvent the CAPTCHA, such as through specially tuned character recognition programs. A brief survey of the Internet will reveal many resources that describe how to tune and/or use character recognition to decipher CAPTCHA, including aiCaptcha, Simon Fraser University and PWNtcha.


The result of the foregoing is that while CAPTCHAs are becoming increasingly more difficult for bots, they are also becoming more difficult and/or burdensome for human users. In certain instances, the desire to defeat the bots has resulted in images that are so distorted that some human users cannot decipher the images. This is particularly true with users having a visual deficiency or impairment. As a partial solution to this escalation of perception difficulty, some web sites have begun adding a link to a sound file that will speak the characters, but these sound files are also being drastically distorted to protect against being discerned by bots through speech pattern matching algorithms. Other web sites, like Facebook.com, have gone so far as to adopt a practice requiring deciphering two distorted word images to increase the complexity for bots. While perhaps achieving the stated objective, the collateral effect is to exacerbate the existing burden to human users.


Current CAPTCHA technology is visual or auditory in nature, requiring the human user to answer a test that should be simple for most humans but difficult for non-humans, e.g., bots. Visual CAPTCHA using distorted images is widely used as the primary test by nearly every top Internet site, including Yahoo, Google, YouTube, Microsoft's Live ID, MySpace, Facebook, Wikipedia and Craigslist. By using solely visual testing criteria, nearly all users will be able to invoke the requested action; not all users have functioning audio equipment, and environments such as libraries may not permit audio use.


A positive user experience is critical to the success and increased popularity of a given website. Designers of web sites go to great lengths to ensure their website is as user friendly as possible. Carnegie Mellon University estimates that 60 million CAPTCHA tests are deciphered every day and with an average time spent of 10 seconds, requiring a total of 150,000 hours of work spent every day trying to protect web sites from bots. Reducing or eliminating the requirement of a user having to decipher CAPTCHA is one more way websites can create a more positive user experience for their visitors and minimize opportunity costs.


SUMMARY OF THE INVENTION

The invention is generally directed to methods, systems and apparatus for assessing the likely user status of a computing device interacting with a server, where the computing device is in bi-directional operative communication with the server and wherein the status is one of a human operator or a computer executable program (also referred to herein as a “bot”). This assessment comprises comparing acquired and/or available data relating to the operation of the computing device to suitable models embodying human user derived data (model data). In most embodiments, the comparison yields a probability value as to one of the status states 140, 330, which then may be used by a program or administrator of the server to permit or deny access and/or operation to the computing device. Because many of the invention embodiments provide a probability result as opposed to a binary result, the invention embodiments avoid the “there is only one right answer” phenomenon inherent in prior art CAPTCHA tests. In other words, rather than placing the burden of proof on the user for functionality/access, which if the user is a human invokes the negative consequences of conventional CAPTCHA tests as previously described, the burden is shifted to the server side of the equation.





BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is described with reference to the accompanying figures. The use of the same reference numbers in different figures indicates similar or identical components or features.



FIG. 1 illustrates an overview of the process described in this disclosure.



FIG. 2 illustrates in more detail the first step 110 of FIG. 1 (a user interacts with a web page).



FIG. 3 illustrates in more detail the second step 120 of FIG. 1 (the user submits a form to the server).



FIG. 4 illustrates in more detail the third step 130 of FIG. 1 (active and passive data related to the user is analyzed and a human confidence score is generated).





DETAILED DESCRIPTION

As used herein, “model data”, its equivalents and verb forms comprises data indicative of human interaction with a computing environment and that can be received by a computing device that is physically remote from the sample computing environment and equivalents. Model data comprises two main categories: active model data and passive model data. Active model data 220 comprises data 210 acquired from a computing device user's interactions therewith and within the computing environment where such data is not normally stored (logged) or transmitted to a remote location. Such model data includes, without limitation, pointing device vector movements and/or cadence, key stroke combinations and/or cadence, time differentials between stimulus (e.g., display of dialog box, radio button, form field, etc., and/or generation of sound) and user response (e.g., input into dialog box, selection of radio button, completion of form field, new page display request rates, etc., and/or input response to sound), and similar metrics. Generally, such data must be monitored and stored 210, 220 by a program operative on the computing device, which makes the data available to another program, preferably on a server 320, or actively transmits such data to a server. Passive model data comprises data available from a computing device user's interactions therewith and within the computing environment where such data is normally stored (logged) or transmitted to a remote location. Such model data includes, without limitation, browser cookies, destination IP histories, originating IP address, originating IP address traffic data, originating IP address physical location, third party data regarding abusers (including originating IP addresses and physical locations), etc.
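Active model data of the kind enumerated above (pointing device vector movements and cadence, keystroke cadence, stimulus-to-response time differentials) could be captured by a small client-side recorder. The following is an illustrative sketch only, not code from the patent; the recorder and its field names are assumptions, and in a browser `record` would be wired to `mousemove` or `keydown` event handlers:

```javascript
// Hypothetical "active data" recorder: accumulates (x, y, t) samples and
// derives vector movements (dx, dy) and cadence (dt) between them.
function makeActiveDataRecorder() {
  const samples = [];
  return {
    // In a browser this would be called from a mousemove/keydown listener.
    record(x, y, t) {
      samples.push({ x, y, t });
    },
    // Pairwise deltas between consecutive samples: movement vector plus
    // the time elapsed, i.e., the cadence of the input.
    vectors() {
      const out = [];
      for (let i = 1; i < samples.length; i++) {
        const a = samples[i - 1];
        const b = samples[i];
        out.push({ dx: b.x - a.x, dy: b.y - a.y, dt: b.t - a.t });
      }
      return out;
    },
  };
}
```

The derived deltas, rather than raw coordinates, are what a server-side comparison against model data would plausibly consume.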


Also as used herein, the term “available data”, its equivalents and verb forms comprises data associated with a computing device's operation and its interaction with a computing environment, such as the Internet, that is generally recorded within the computing device and/or by other devices that have been affected by the computing device's operation—this is also a type of passive data; the term “acquired data”, its equivalents and verb forms comprises data associated with a computing device's operation and its interaction with a computing environment, such as the Internet, that is generally not recorded within the computing device and/or by other devices that have been affected by the computing device's operation, but at least some data of which has/have been recorded and/or transmitted to a remote location, such as a server—this is a type of active data.


In addition to the foregoing, the term “issued data”, its equivalents and verb forms comprises data generated by a server or other computing device that is not the same as the computing device for which the assessment as to user status is being performed; “monitored data”, its equivalents and verb forms comprises active or passive data, whether available or acquired, obtained from the computing device, or as a result of its external interactions, after the generation of issued data; “interest data”, its equivalents and verb forms comprises active or passive data, whether available or acquired, that correlates to any data within model data, whether obtained prior to or after the generation of issued data. Thus, interest data includes time independent available data and acquired data, unless qualified differently.


With the foregoing definitions in mind, operation of the various invention embodiments can be better understood. In a first series of embodiments, a comparison between interest data, acquired prior to delivery of issued data to the client computing device, and model data is performed to ascertain the likely status of the client computing device, i.e., human user or bot 130, 420. In a second series of embodiments, a comparison between monitored data, by definition acquired after delivery of issued data to the client computing device, and model data is performed to ascertain the likely status of the client computing device, i.e., human user or bot 130, 240. In both series of embodiments, acquired and/or available data may be used for comparison with suitable model data. The recited comparisons can take place locally on the computing device, remotely on the originating server, or on a server dedicated to performing such actions and for which subscriptions may be offered in conjunction with methods for providing services according to the methods, apparatus and systems embodiments described herein.
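The comparison between interest data and model data that yields a probability of human status might be sketched as a weighted combination of per-feature likelihoods. This is a minimal stand-in, not the patent's actual scoring method; the feature names, weights and combination rule are all assumptions:

```javascript
// Illustrative human-confidence score: each entry in featureLikelihoods is
// an estimated probability (0..1) that one feature of the interest data is
// human-derived; the result is their weighted mean.
function humanConfidenceScore(featureLikelihoods, weights) {
  let num = 0;
  let den = 0;
  for (const [name, p] of Object.entries(featureLikelihoods)) {
    const w = weights[name] ?? 1; // unlisted features get unit weight
    num += w * p;
    den += w;
  }
  // 0 = almost certainly a bot, 1 = almost certainly human.
  return den === 0 ? 0 : num / den;
}
```

A server or administrator could then compare the returned value against a policy threshold to permit access, deny it, or fall back to a conventional CAPTCHA.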


While available data represents data that is readily harvestable by query, for example, from the computing device or the computing environment in which the device operates, acquired data requires some form of information capture means. In the various embodiments described herein, the computing device is caused to monitor and retain certain data useful as acquired data for comparison purposes. Such monitoring and retaining means for acquiring data from the computing device comprises, without limitation, modification of (an) existing program(s) (e.g., such means are included in available browsers), a covert program (e.g., many malware applications log keystrokes and periodically pass them to remote servers for malicious purposes; similar technology can be used to exploit necessary aspects of the invention embodiments), or a servlet/Java applet. If user privacy is a concern, the monitoring and retaining means can remain dormant until activated by, for example, an enabled web site 110.
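The privacy-respecting variant described above, where the monitoring means remains dormant until activated by an enabled web site, could look like the following sketch. The collector and its method names are illustrative assumptions, not the patent's implementation:

```javascript
// Hypothetical dormant collector: records nothing until an enabled site
// activates it, addressing the privacy concern noted in the text.
function makeDormantCollector() {
  let active = false;
  const log = [];
  return {
    activate() { active = true; },          // called by an enabled web site
    record(event) { if (active) log.push(event); },
    data() { return log.slice(); },         // copy handed to the server
  };
}
```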


The monitoring and retaining means may also enable transmission of some or all retained data 410, in encrypted or unencrypted form, as may be desired for privacy and security purposes, and/or merely retain the data until requested from, for example, the server, at which time some or all data may be transmitted 120, 310. As described above with reference to the comparison actions 130, 410, such receiving and/or polling actions can be carried out remotely on the originating server or on a server dedicated to performing such actions, if not performed locally on the computing device.


From the foregoing, it can be seen that implementation of the invention embodiments can be accomplished exclusively from the server side; it is not necessary to distribute or install client-side software in the conventional sense. Existing available browsers and operating systems provide the means necessary to temporarily install logging code, if such is elected. Moreover, the methods, and associated systems and apparatus, described herein are highly transparent to the user, thereby achieving an objective of enhancing the user's experience of a web site employing bot assessment protocols.


DESCRIPTION OF AN INVENTION EMBODIMENT

A primary objective of bot creation is to autonomously access data and/or functionality of a target server as quickly as possible. By assessing user biometrics having a time domain, the time variable becomes a necessary component to accessing the data and/or functionality of the server. Because such assessment has heretofore been absent as a valid CAPTCHA marker of a human user, and more importantly because proper data input would necessarily slow the process, the likelihood of bot penetration has been significantly reduced.


An embodiment of the invention employs a first layer of testing that simply checks if there were valid mouse movements and/or key strokes inputted by the user of a computing device that is attempting to access a server resource “protected” from bots. This basic “if-then” check is essentially without overhead since there are no computations being carried out. Checking for the existence of the target activity therefore represents a first pass evaluation; if the bot is not programmed to include pseudo biometric data, further access is denied. In other words, if no activity is recorded there is a very high probability that the user is actually a bot.
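The essentially overhead-free first-layer check described here is a bare existence test. A minimal sketch, with illustrative event-count field names:

```javascript
// First-layer knockout: if no mouse movements or key strokes were recorded
// at all, there is a very high probability the user is a bot.
function passesActivityCheck(session) {
  return (session.mouseEvents ?? 0) > 0 || (session.keyEvents ?? 0) > 0;
}
```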


A fundamental premise of robust biometrics is that a given dataset for each person is unique. Therefore, if the dataset is sufficiently robust, it is impossible to have duplicative input data unless the input data was derived from a machine. Exploiting this premise allows a second level knockout assessment to deny user access if the input data exactly (or statistically sufficiently) matches previously recorded data. Of course, the skilled practitioner employing this method can select (either explicitly or via programming) sample points of a dataset for comparison as opposed to all data, thereby reducing computational overhead and storage issues. Alternatively, if samples are used, an exact match could then invoke a more intensive comparison with the same stored datasets, where again access can be denied when an exact or statistically sufficient match is found.
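The second-layer assessment, denying access when input data exactly or near-exactly replays a previously recorded dataset, could be sketched as below. Comparing sampled inter-event timings and the fixed tolerance are illustrative assumptions; a real system would choose sample points and a statistical match criterion:

```javascript
// Second-layer knockout sketch: a (near-)exact replay of any previously
// recorded timing dataset suggests machine-derived input.
function isLikelyReplay(sample, storedSamples, tolerance = 1) {
  return storedSamples.some(
    (stored) =>
      stored.length === sample.length &&
      stored.every((v, i) => Math.abs(v - sample[i]) <= tolerance)
  );
}
```

Comparing only sampled points keeps the per-request cost low, with a fuller comparison reserved for suspicious matches, as the text suggests.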


In the foregoing two assessments, an object has been to ferret out bots in an efficient and low overhead manner by exploiting intrinsic design limitations. However, it is possible that a bot designer could spoof these assessment means by, for example, running many bots in parallel wherein intrinsic delays in CPU processing and bandwidth would introduce inherent time delays associated with the very inputs being assessed. Therefore, more robust assessment means may be employed to ascertain the presence of a bot.


In robust embodiments of the invention, a third layer of testing may be employed that compares recorded pointer movements and key strokes to previously recorded activity for a given input page that was knowingly created by humans. Thus, as input data is collected for a given page, patterns will emerge that are unique to human activity. Subsequently recorded activity that is inconsistent with these patterns would indicate the potential that the user is a bot. Access could then be denied, or further CAPTCHA tests presented. Alternatively, access could be granted since no lock is pick proof and an object of the invention embodiments is to minimize user exposure to CAPTCHA tests.
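The third-layer comparison against patterns derived from known-human activity on a given page might reduce, in its simplest form, to an outlier test against a per-page statistical model. The mean/standard-deviation model and the three-sigma cutoff below are illustrative assumptions, not the patent's method:

```javascript
// Third-layer sketch: flag sessions whose mean inter-event interval falls
// far outside the human-derived model recorded for this input page.
function isConsistentWithHumans(meanDt, pageModel, sigmas = 3) {
  const z = Math.abs(meanDt - pageModel.mean) / pageModel.std;
  return z <= sigmas;
}
```

An inconsistent result need not be an outright denial; as the text notes, it could instead trigger a fallback CAPTCHA, or be tolerated, since an objective is to minimize human exposure to such tests.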

Claims
  • 1. A method comprising: providing data collection data that causes computing devices to collect active data; receiving interest data including active data and passive data from a computing device attempting access to a protected page of a website; performing an assessment comprising assessing the received interest data including at least some active data and at least some passive data in conjunction with model data relating to prior sessions involving human interactions with the website; and generating a probability value based on the assessment, wherein the probability value represents a likelihood that an operator of the computing device is a human being rather than an autonomic computer application.
  • 2. A method as claim 1 recites, wherein the model data comprises both passive model data and active model data.
  • 3. A method as claim 1 recites, further comprising granting access to the protected page if the probability value indicates a greater likelihood that the operator of the computing device is a human.
  • 4. A method as claim 1 recites, further comprising providing a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) to the computing device attempting to access the server if the probability value does not indicate a greater likelihood that the operator of the computing device is a human.
  • 5. A method as claim 1 recites, further comprising denying access to the protected page if the probability value indicates a greater likelihood that the operator of the computing device is an autonomic computer application.
  • 6. A method as claim 1 recites, wherein the data collection data comprises JavaScript.
  • 7. A method as claim 1 recites, further comprising decrypting the received interest data prior to performing the assessment.
  • 8. A method as claim 1 recites, wherein the passive data includes at least browser cookies.
  • 9. A method as claim 1 recites, wherein the active data included in the received interest data includes at least one of pointing device vector movements or pointing device vector cadence.
  • 10. A non-transitory machine readable medium having stored thereon instructions, which when executed by at least one machine, configure the at least one machine to perform operations comprising: provide data collection data that causes computing devices to collect active data; receive interest data including active data and passive data from a computing device attempting access to a protected page of a website; perform an assessment comprising assessing the received interest data including at least some active data and at least some passive data in conjunction with model data relating to prior sessions involving human interactions with the website; and generate a probability value based on the assessment, wherein the probability value represents a likelihood that an operator of the computing device is a human being rather than an autonomic computer application.
  • 11. A non-transitory machine readable medium as claim 10 recites, wherein the model data comprises both passive model data and active model data.
  • 12. A non-transitory machine readable medium as claim 10 recites, wherein the instructions, when executed by the at least one machine, configure the at least one machine to grant access to the protected page if the probability value indicates a greater likelihood that the operator of the computing device is a human.
  • 13. A non-transitory machine readable medium as claim 10 recites, wherein the instructions, when executed by the at least one machine, configure the at least one machine to provide a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) to the computing device attempting to access the server if the probability value does not indicate a greater likelihood that the operator of the computing device is a human.
  • 14. A non-transitory machine readable medium as claim 10 recites, wherein the instructions, when executed by the at least one machine, configure the at least one machine to deny access to the protected page if the probability value indicates a greater likelihood that the operator of the computing device is an autonomic computer application.
  • 15. A non-transitory machine readable medium as claim 10 recites, wherein the data collection comprises JavaScript.
  • 16. A non-transitory machine readable medium as claim 10 recites, wherein the instructions, when executed by the at least one machine, configure the at least one machine to decrypt the received interest data prior to performing the assessment.
  • 17. A non-transitory machine readable medium as claim 10 recites, wherein the passive data includes at least browser cookies.
  • 18. A non-transitory machine readable medium as claim 10 recites, wherein the active data included in the received interest data includes at least one of pointing device vector movements or pointing device vector cadence.
  • 19. A server system comprising: a processing device configured to execute computer instructions; a non-transitory machine readable medium having stored thereon the computer instructions, which when executed by the processing device, cause a server of the server system to: provide data collection data that causes computing devices to collect active data; receive interest data including active data and passive data from a computing device attempting access to a protected page of a website; perform an assessment comprising assessing the received interest data including at least some active data and at least some passive data in conjunction with model data relating to prior sessions involving human interactions with the website; and generate a probability value based on the assessment, wherein the probability value represents a likelihood that an operator of the computing device is a human being rather than an autonomic computer application.
  • 20. A server system as claim 19 recites, wherein the model data comprises both passive model data and active model data.
  • 21. A server system as claim 19 recites, wherein the computer instructions, when executed by the processing device, cause the server of the server system to grant access to the protected page if the probability value indicates a greater likelihood that the operator of the computing device is a human.
  • 22. A server system as claim 19 recites, wherein the computer instructions, when executed by the processing device, cause the server of the server system to provide a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) to the computing device attempting to access the server if the probability value does not indicate a greater likelihood that the operator of the computing device is a human.
  • 23. A server system as claim 19 recites, wherein the computer instructions, when executed by the processing device, cause the server of the server system to deny access to the protected page if the probability value indicates a greater likelihood that the operator of the computing device is an autonomic computer application.
  • 24. A server system as claim 19 recites, wherein the data collection comprises JavaScript.
  • 25. A server system as claim 19 recites, wherein the computer instructions, when executed by the processing device, cause the server of the server system to decrypt the received interest data prior to performing the assessment.
  • 26. A server system as claim 19 recites, wherein the passive data includes at least browser cookies.
  • 27. A server system as claim 19 recites, wherein the active data included in the received interest data includes at least one of pointing device vector movements or pointing device vector cadence.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a divisional application of application Ser. No. 15/457,099 filed Mar. 13, 2017, which claims priority to application Ser. No. 12/313,502 filed Nov. 19, 2008 and issued as U.S. Pat. No. 9,595,008 on Mar. 14, 2017, which claims priority to provisional application Ser. No. 61/003,743 filed Nov. 19, 2007, all of which are incorporated herein by reference.

US Referenced Citations (105)
Number Name Date Kind
5375244 McNair Dec 1994 A
5933498 Schneck et al. Aug 1999 A
6195698 Lillibridge et al. Feb 2001 B1
6405922 Kroll Jun 2002 B1
6460141 Olden Oct 2002 B1
7089241 Alspector et al. Aug 2006 B1
7373524 Motsinger et al. May 2008 B2
7506170 Finnegan Mar 2009 B2
7516220 Stiert Apr 2009 B1
7606915 Calinov et al. Oct 2009 B1
7657849 Chaudhri et al. Feb 2010 B2
7680891 Pongsajapan Mar 2010 B1
7721107 Golle et al. May 2010 B2
7785180 Von Ahn et al. Aug 2010 B1
7877707 Westerman et al. Jan 2011 B2
7895653 Calo et al. Feb 2011 B2
7917491 Sack Mar 2011 B1
7917508 Baluja et al. Mar 2011 B1
7945952 Behforooz May 2011 B1
7980953 von Ahn Arellano Jul 2011 B2
8032483 Haveliwala et al. Oct 2011 B1
8056129 Gusler II et al. Nov 2011 B2
8214891 Seacat Jul 2012 B2
8280993 Awadallah et al. Oct 2012 B2
8353764 Williams et al. Jan 2013 B2
8413250 Krynski Apr 2013 B1
8510795 Gargi Aug 2013 B1
8601538 Qvarfordt et al. Dec 2013 B2
8682718 Zwicky Mar 2014 B2
8684839 Mattice et al. Apr 2014 B2
8694244 Schalk Apr 2014 B2
8700259 Schalk Apr 2014 B2
8706405 Schalk Apr 2014 B2
8713657 Lee Apr 2014 B2
8738287 Schalk May 2014 B2
8775235 Hedley et al. Jul 2014 B2
8775236 Hedley et al. Jul 2014 B2
8793135 Lewis et al. Jul 2014 B2
8824659 Bushey et al. Sep 2014 B2
8825379 Schalk Sep 2014 B2
8838570 English Sep 2014 B1
8903052 Moore et al. Dec 2014 B2
8938395 Willner et al. Jan 2015 B2
9047458 Etchegoyen Jun 2015 B2
9088652 Bushey et al. Jul 2015 B2
9152381 Valentino et al. Oct 2015 B2
9208461 Busa Dec 2015 B2
9240078 Hedley et al. Jan 2016 B2
9595008 Heikell Mar 2017 B1
20050008148 Jacobson Jan 2005 A1
20050015257 Bronstein Jan 2005 A1
20050066201 Goodman et al. Mar 2005 A1
20050114705 Reshef May 2005 A1
20050144067 Farahat Jun 2005 A1
20050198158 Fabre et al. Sep 2005 A1
20050278253 Meek Dec 2005 A1
20060015390 Rijsinghani et al. Jan 2006 A1
20060136294 Linden et al. Jun 2006 A1
20070016689 Birch Jan 2007 A1
20070018393 Ritter et al. Jan 2007 A1
20070038568 Greene Feb 2007 A1
20070106737 Barnes et al. May 2007 A1
20070165911 Baentsch et al. Jul 2007 A1
20070192419 Vuong et al. Aug 2007 A1
20070192849 Golle et al. Aug 2007 A1
20070201745 Wang et al. Aug 2007 A1
20070239604 O'Connell Oct 2007 A1
20070240230 O'Connell et al. Oct 2007 A1
20070250920 Lindsay Oct 2007 A1
20070255818 Tanzer et al. Nov 2007 A1
20070255821 Ge et al. Nov 2007 A1
20070259716 Mattice et al. Nov 2007 A1
20070300077 Mani et al. Dec 2007 A1
20080040653 Levine Feb 2008 A1
20080049969 Koziol Feb 2008 A1
20080066014 Misra Mar 2008 A1
20080086524 Afergan Apr 2008 A1
20080091453 Meehan et al. Apr 2008 A1
20080114624 Kitts May 2008 A1
20080147456 Broder et al. Jun 2008 A1
20080162200 O'Sullivan et al. Jul 2008 A1
20080162227 Jakobsson et al. Jul 2008 A1
20080225870 Sundstrom Sep 2008 A1
20080127302 Qvarfordt et al. Oct 2008 A1
20080263636 Gusler Oct 2008 A1
20080281606 Kitts et al. Nov 2008 A1
20080288303 Gray et al. Nov 2008 A1
20080319774 O'Sullivan et al. Dec 2008 A1
20090024971 Willner et al. Jan 2009 A1
20090044264 Ramanathan et al. Feb 2009 A1
20090055193 Maislos et al. Feb 2009 A1
20090094311 Awadallah et al. Apr 2009 A1
20090113294 Sanghavi et al. Apr 2009 A1
20090150986 Foreman Jun 2009 A1
20090197815 Vincenzi et al. Aug 2009 A1
20090241174 Rajan et al. Sep 2009 A1
20090249477 Punera Oct 2009 A1
20090307028 Eldon et al. Dec 2009 A1
20090320123 Yu et al. Dec 2009 A1
20100037147 Champion et al. Feb 2010 A1
20100070620 Awadallah et al. Mar 2010 A1
20110029902 Bailey Feb 2011 A1
20110113388 Eisen et al. May 2011 A1
20130019290 Lee Jan 2013 A1
20150112892 Kaminsky Apr 2015 A1
Foreign Referenced Citations (2)
Number Date Country
2007213409 Aug 2007 JP
WO2007090605 Aug 2007 WO
Non-Patent Literature Citations (186)
Entry
Al-Qayedi, et al., “Combined Web/Mobile Authentication for Secure Web Access Control”, WCNC 2004, IEEE Communications Society, 2004, 5 pages.
“Fingerprint Reader: Replace Passwords with Your Fingerprint” retrieved from the internet on Aug. 16, 2018 at «https://web.archive.org/web/20061230123302/http://www.microsoft.com/hardware/mouseandkeyboard/productdetails.aspx?pid=036», 1 page.
McCausland et al., “A Proactive Risk-Aware Robotic Sensor Network for Critical Infrastructure Protection”, IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Apr. 2013, pp. 132-137.
“Microsoft Fingerprint Reader” retrieved from the internet on Aug. 16, 2018 at «https://en.wikipedia.org/wiki/Microsoft_Fingerprint_Reader», 2 pages.
“Microsoft Fingerprint Reader” retrieved from the internet on Aug. 16, 2018 at «https://web.archive.org/web/20060913000000/http://en.wikipedia.org:80/wikiMicrosoft_Fingerprint_Reader» 1 page.
“Most people have at least 15 username and password combinations to remember” retrieved from the internet on Aug. 16, 2018 at «https://web.archive.org/web/20050111041404/http://www.microsoft.com:80/hardware/mouseandkeyboard/features/docs/fingerprint.html » 12 pages.
Office action for U.S. Appl. No. 12/313,502, dated Sep. 16, 2011, Heikell, “Systems, Methods and Apparatus for Evaluating Status of Computing Device User”, 9 pages.
Office Action for U.S. Appl. No. 15/457,099, dated Sep. 25, 2018, Heikell, “Systems, Methods and Apparatus for Evaluating Status of Computing Device User”, 16 pages.
Pal et al., “Trust Assessment from Observed Behavior: Toward an Essential Service for Trusted Network Computing”, Fifth IEEE International Symposium on Network Computing and Applications (NCA'06), Jul. 2006, pp. 285-292.
“Researcher Hacks Microsoft Fingerprint Reader” retrieved from the internet on Aug. 16, 2018 at «https://www.pcworld.com/article/124978/article.html» 3 pages.
Temple et al., “Testability Modeling and Analysis of a Rocket Engine Test Stand”, IEEE Aerospace Conference, Jan. 2005, pp. 3874-3895.
“Tired of passwords? Replace them with your fingerprint” retrieved from the internet on Aug. 16, 2018 at «https://web.archive.org/web/20040926002804/http://www.microsoft.com:80hardware;mouseandkeyboard/features/fingerprint.mspx», 1 page.
Daniel Lopresti CV, dated Apr. 2022, 56 pages.
File History for U.S. Pat. No. 10,423,885 filed Mar. 13, 2017, 226 pages.
File History for U.S. Pat. No. 9,595,008 filed Nov. 19, 2008, 137 pages.
Google Prior Art Stipulation dated Apr. 29, 2022, 2 pages.
Initial Related Complaint from United States District Court, Western District of Texas, Waco Division, Case No. 6:21-cv-1290, dated Dec. 20, 2021, 39 pages.
Declaration of Daniel P. Lopresti, Ph.D. for U.S. Pat. No. 9,595,008, dated Apr. 29, 2022, 120 pages.
Declaration of Daniel P. Lopresti, Ph.D. for U.S. Pat. No. 10,423,885, dated Apr. 29, 2022, 119 pages.
Petition for Inter Partes Review for U.S. Pat. No. 9,595,008 dated Apr. 29, 2022, 86 pages.
Petition for Inter Partes Review for U.S. Pat. No. 10,423,885 dated Apr. 29, 2022, 94 pages.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit B-6 885”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. B-6—885 Claim Chart for Kitts I, pp. 1-150.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit B-7 885”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. B-7—885 Claim Chart for Gusler I, pp. 1-142.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit B-8”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. B-8—885 Claim Chart with Obviousness Combinations, pp. 1-239.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit C-008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. C—008 Claim Chart with Subject-Matter Eligibility Contentions, pp. 1-61.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit D-885”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. D—885 Claim Chart with Subject-Matter Eligibility Contentions, pp. 1-53.
Flanagan, “JavaScript: The Definitive Guide”, vol. 1018, O'Reilly, Aug. 2006, 1023 pgs.
Yeung, et al., “Detecting Cheaters for Multiplayer Games: Theory, Design and Implementation”, IEEE CCNC 2006 Proceedings, Jan. 8-10, 2006, pp. 1178-1182.
Gamboa, et al., “A behavioral biometric system based on human-computer interaction”, Proc. SPIE 5404, Biometric Technology for Human Identification, vol. 5404, Aug. 25, 2004, pp. 381-392.
Ghosemajumder, “Google, Click Fraud and Invalid Clicks”, accessed Apr. 6, 2022, at «http://shumans.com/articles/000044.php», Dec. 12, 2006, 3 pgs.
Golle, “Machine learning attacks against the Asirra CAPTCHA”, Proceedings of the 15th ACM conference on Computer and Communications Security, 2008, pp. 535-542.
Golle & Ducheneaut, “Preventing bots from playing online games”, ACM Computers in Entertainment, vol. 3, No. 3, Article 3C, Jul. 2005, 10 pgs.
Gossweiler, et al., “What's up CAPTCHA? A CAPTCHA Based on Image Orientation”, Proceedings of the 18th International Conference on World Wide Web, 2009, pp. 841-850.
Immorlica, et al., “Click Fraud Resistant Methods for Learning Click-Through Rates” In International Workshop on Internet and Network Economics, Springer, Berlin, Heidelberg, 2005, pp. 34-45.
Ives, “Web Marketers Fearful of Fraud in Pay-Per-Click”, The New York Times, 2005, 3 pgs.
Jain, et al., “Biometrics: Personal Identification in Networked Society”, Springer Science & Business Media, 1999, Part 1, 100 pgs.
Jain, et al., “Biometrics: Personal Identification in Networked Society”, Springer Science & Business Media, 1999, Part 2, 100 pgs.
Jain, et al., “Biometrics: Personal Identification in Networked Society”, Springer Science & Business Media, 1999, Part 3, 100 pgs.
Jain, et al., “Biometrics: Personal Identification in Networked Society”, Springer Science & Business Media, 1999, Part 4, 111 pgs.
Jain, et al., “Handbook of biometrics” Springer Science & Business Media, 2007, Part 1, 138 pgs.
Jain, et al., “Handbook of biometrics” Springer Science & Business Media, 2007, Part 2, 138 pgs.
Jain, et al., “Handbook of biometrics” Springer Science & Business Media, 2007, Part 3, 74 pgs.
Jain, et al., “Handbook of biometrics” Springer Science & Business Media, 2007, Part 4, 100 pgs.
Jain, et al., “Handbook of biometrics” Springer Science & Business Media, 2007, Part 5, 100 pgs.
Jakobsson, “Crimeware: Understanding New Attacks and Defenses”, Addison-Wesley Professional, 2008, Part 1, 120 pgs.
Jakobsson, “Crimeware: Understanding New Attacks and Defenses”, Addison-Wesley Professional, 2008, Part 2, 120 pgs.
Jakobsson, “Crimeware: Understanding New Attacks and Defenses”, Addison-Wesley Professional, 2008, Part 3, 120 pgs.
Jakobsson, “Crimeware: Understanding New Attacks and Defenses”, Addison-Wesley Professional, 2008, Part 4, 120 pgs.
Jakobsson, “Crimeware: Understanding New Attacks and Defenses”, Addison-Wesley Professional, 2008, Part 5, 117 pgs.
Koch, “PPK on JavaScript”, New Riders, 2006, 530 pgs.
Kochanski, et al., “A Reverse Turing Test using speech”, Proceedings of the Seventh International Conference on Spoken Language Processing, Denver, Colorado, USA, Sep. 16-20, 2002, pp. 1357-1360.
Larsen, et al., “An introduction to mathematical statistics and its applications” 2018, Part 1, 120 pgs.
Larsen, et al., “An introduction to mathematical statistics and its applications” 2018, Part 2, 120 pgs.
Larsen, et al., “An introduction to mathematical statistics and its applications” 2018, Part 3, 120 pgs.
Larsen, et al., “An introduction to mathematical statistics and its applications” 2018, Part 4, 120 pgs.
Larsen, et al., “An introduction to mathematical statistics and its applications” 2018, Part 5, 120 pgs.
Larsen, et al., “An introduction to mathematical statistics and its applications” 2018, Part 6, 120 pgs.
Larsen, et al., “An introduction to mathematical statistics and its applications” 2018, Part 7, 120 pgs.
Larsen, et al., “An introduction to mathematical statistics and its applications” 2018, Part 8, 90 pgs.
Lopresti, “Leveraging the CAPTCHA Problem”, International Workshop on Human Interactive Proofs, Springer, Berlin, Heidelberg, May 2005, pp. 97-110.
Mann, “How click fraud could swallow the internet”, Wired Magazine 14, No. 1, Jan. 1, 2006, 4 pgs.
May, “Escape from CAPTCHA”, accessed Mar. 28, 2022, from the Internet Archive Wayback Machine at «https://web.archive.org/web/20050206064605/http://www.w3.org:80/2004/Talks/0319-csun-m3m/Overview-1.html», 25 pgs.
May, “Inaccessibility of CAPTCHA”, Alternatives to Visual Turing Tests on the Web, W3C Working Group Note, Nov. 23, 2005, 7 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 1, 120 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 10, 74 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 2, 80 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 3, 100 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 4, 100 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 5, 100 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 6, 100 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 7, 66 pages.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 8, 100 pgs.
McConnell, “Code Complete 2nd edition”, Publishing House of Electronics Industry, Jan. 1, 2006, Part 9, 100 pgs.
Microsoft Corporation, Online Video for “Introducing Microsoft Fingerprint Reader,” captured Jan. 11, 2005, on The Internet Archive Wayback Machine at «https://web.archive.org/web/20050111041404/http://www.microsoft.com/hardware/mouseandkeyboard/features/docs/», 1 pg.
Moncur, “Sams Teach Yourself JavaScript in 24 Hours”, Sams, Switzerland, 2002, Part 1, 100 pgs.
Moncur, “Sams Teach Yourself JavaScript in 24 Hours”, Sams, Switzerland, 2002, Part 2, 100 pgs.
Moncur, “Sams Teach Yourself JavaScript in 24 Hours”, Sams, Switzerland, 2002, Part 3, 120 pgs.
Moncur, “Sams Teach Yourself JavaScript in 24 Hours”, Sams, Switzerland, 2002, Part 4, 125 pgs.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, pp. 1-111.
Office Action for U.S. Appl. No. 17/874,137, dated Oct. 28, 2022, Heikell, “Systems, Methods and Apparatus for Evaluating Status of Computing Device User”, 12 pgs.
Office Action for U.S. Appl. No. 17/882,082, dated Oct. 28, 2022, Heikell, “Systems, Methods and Apparatus for Evaluating Status of Computing Device User”, 12 pgs.
Oli, “The Cutest Human-Test: KittenAuth”, retrieved at «https://thepcspy.com/read/the-cutest-humantest-kittenauth/», Apr. 6, 2006, 29 pgs.
Park, et al., “Securing Web Service by Automatic Robot Detection” in USENIX Annual Technical Conference, General Track, 2006, pp. 255-260.
Penenberg, “BlowSearch Tackles Click Fraud”, accessed on Apr. 6, 2022, at «http://www.wired.com/2005/06/blowsearch-tackles-click-fraud/», Wired, Jun. 18, 2005, 2 pgs.
Piccardi, “Background subtraction techniques: a review”, 2004 IEEE International Conference on Systems, Man and Cybernetics, 2004, pp. 3099-3104.
Pilgrim, “Greasemonkey Hacks: Tips & Tools for Remixing the Web with Firefox”, O'Reilly Media, Inc., 2005, 498 pgs.
Powell, et al., “JavaScript: The Complete Reference”, McGraw-Hill/Osborne, Inc., 2004, Part 1, 120 pgs.
Powell, et al., “JavaScript: The Complete Reference”, McGraw-Hill/Osborne, Inc., 2004, Part 2, 120 pgs.
Powell, et al., “JavaScript: The Complete Reference”, McGraw-Hill/Osborne, Inc., 2004, Part 3, 120 pgs.
Powell, et al., “JavaScript: The Complete Reference”, McGraw-Hill/Osborne, Inc., 2004, Part 4, 120 pgs.
Powell, et al., “JavaScript: The Complete Reference”, McGraw-Hill/Osborne, Inc., 2004, Part 5, 120 pgs.
Powell, et al., “JavaScript: The Complete Reference”, McGraw-Hill/Osborne, Inc., 2004, Part 6, 120 pgs.
Powell, et al., “JavaScript: The Complete Reference”, McGraw-Hill/Osborne, Inc., 2004, Part 7, 120 pgs.
Powell, et al., “JavaScript: The Complete Reference”, McGraw-Hill/Osborne, Inc., 2004, Part 8, 138 pgs.
Prevent PPC Click Fraud—TrafficSentry.com, “Do You Advertise on Any of the Pay-Per Click Networks or Search Engines,” accessed Apr. 29, 2022 on The Wayback Machine, «https://web.archive.org/web/20070118082529/http://clickhawk.com:80/clickhawk/», 14 pgs.
“ReCAPTCHA: Using Captchas to Digitize Books”, accessed on Jan. 18, 2022, at «https://techcrunch.com/2007/09/16/recaptcha-using-captchas-to-digitize-books/», TechCrunch, Sep. 16, 2007, 10 pgs.
Ross, et al., “Human recognition using biometrics: an overview” In Annales Des Télécommunications, vol. 62, No. 1, Springer-Verlag, Jan. 2007, pp. 11-35.
Rui, et al., “Characters or faces: A User Study on Ease of Use for HIPs” International Workshop on Human Interactive Proofs, Springer, Berlin, Heidelberg, 2005, pp. 53-65.
Rusu, “Exploiting the gap in human and machine abilities in handwriting recognition for web security applications”, University of New York at Buffalo, Aug. 2007, 141 pgs.
Rusu, et al., “Visual CAPTCHA with handwritten image analysis”, International Workshop on Human Interactive Proofs, Springer, Berlin, Heidelberg, 2005, pp. 42-52.
Saygin, et al., “Turing Test: 50 Years Later”, Minds and Machines, vol. 10, 2000, pp. 463-518.
Shirali-Shahreza, et al., “CAPTCHA for Blind People”, 2007 IEEE International Symposium on Signal Processing and Information Technology, 2007, pp. 995-998.
Shirali-Shahreza, et al., “CAPTCHA for Children,” 2008 IEEE International Conference on System of Systems Engineering, IEEE, 2008, 7 pgs.
Shirali-Shahreza, et al., “Bibliography of works done on CAPTCHA”, 2008 3rd International Conference on Intelligent System and Knowledge Engineering, vol. 1, 2008, pp. 205-210.
Silva, et al., “Ka-CAPTCHA: An Opportunity for Knowledge Acquisition on the Web”, in AAAI, 2007, pp. 1322-1327.
Sion, et al., “On-the-fly intrusion detection for web portals” In Proceedings ITCC 2003. International Conference on Information Technology: Coding and Computing, IEEE, Apr. 28-30, 2003, pp. 325-330.
Sylvain, et al., “Users Authentication by a study of human computer interactions”, Universite Fran, ois-Rabelais, Tours—France, May 18, 2004, 6 pgs.
Tam, et al., “Breaking Audio CAPTCHAs”, Advances in Neural Information Processing Systems 21, Dec. 8, 2008, pp. 1-8.
Tan, et al., “Discovery of Web Robot Sessions Based on their Navigational Patterns”, Intelligent Technologies for Information Analysis, Springer-Verlag, Berlin, Heidelberg, 2004, 30 pgs.
Thau, “The Book of JavaScript, 2nd Edition: A Practical Guide to Interactive Web Pages”, No Starch Press, 2006, Part 1, 120 pgs.
Thau, “The Book of JavaScript, 2nd Edition: A Practical Guide to Interactive Web Pages”, No Starch Press, 2006, Part 2, 120 pgs.
Thau, “The Book of JavaScript, 2nd Edition: A Practical Guide to Interactive Web Pages”, No Starch Press, 2006, Part 3, 120 pgs.
Thau, “The Book of JavaScript, 2nd Edition: A Practical Guide to Interactive Web Pages”, No Starch Press, 2006, Part 4, 169 pgs.
Thompson, “For Certain Tasks, the Cortex Still Beats the CPU”, Wired, Jun. 2007, 11 pgs.
Tuzhilin, “The Lane's Gifts v. Google Report”, Official Google Blog: Findings on invalid clicks, posted 2006, 47 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 1, 120 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 10, 95 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 2, 120 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 3, 120 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 4, 120 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 5, 120 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 6, 120 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 7, 120 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 8, 120 pgs.
Vincent, “JavaScript Developer's Dictionary”, Sams, United Kingdom, 2002, Part 9, 120 pgs.
Von Ahn, et al., “Telling Humans and Computers Apart (Automatically) or How Lazy Cryptographers Do AI,” Computer Science Department, Carnegie Mellon University, 2002, 11 pgs.
Walker, “In Game of Click and Mouse, Advertisers Come Up Empty”, accessed on Apr. 6, 2022, at «https://www.netmagellan.com/in-game-of-click-and-mouse-advertisers-come-up-empty-279.html», Net Magellan, Mar. 16, 2006, 2 pgs.
Yampolskiy, et al.,“Embedded Noninteractive Continuous Bot Detection”, ACM Computers in Entertainment, vol. 5, No. 4, Article 7, Mar. 2008, 11 pgs.
Yan, “Bot, Cyborg and Automated Turing Test” International Workshop on Security Protocols, Lecture Notes in Computer Science, vol. 5087, Springer, Berlin, Heidelberg, Jun. 2006, pp. 190-197 (Abstract).
Yan, Jeff, “Bot, Cyborg and Automated Turing Test: (Or Putting the Humanoid in the Protocol),” Security Protocols: 14th International Workshop, Cambridge, UK, 2006, Revised Selected Papers 14, Springer, 2009, 15 pgs.
Zakas, “Professional JavaScript for Web Developers” Wiley Publishing, Inc., 2009, Part 1, 120 pgs.
Zakas, “Professional JavaScript for Web Developers” Wiley Publishing, Inc., 2009, Part 2, 120 pgs.
Zakas, “Professional JavaScript for Web Developers” Wiley Publishing, Inc., 2009, Part 3, 120 pgs.
Zakas, “Professional JavaScript for Web Developers” Wiley Publishing, Inc., 2009, Part 4, 120 pgs.
Zakas, “Professional JavaScript for Web Developers” Wiley Publishing, Inc., 2009, Part 5, 120 pgs.
Zakas, “Professional JavaScript for Web Developers” Wiley Publishing, Inc., 2009, Part 6, 60 pgs.
Ahn, et al., “CAPTCHA: Using Hard AI Problems for Security”, Lecture Notes in Computer Science, vol. LNCS 2656, 2003, pp. 294-331.
Ahn, et al., “Human computation” Synthesis lectures on artificial intelligence and machine learning, vol. 5, No. 3, Jun. 2011, pp. 1-121.
Ahn, et al., “Labeling Images with a Computer Game”, Association for Computing Machinery, New York, Apr. 24-29, 2004, pp. 319-326.
Ahn, et al., “reCAPTCHA: Human-Based Character Recognition via Web Security Measures”, Science 321, No. 5895, Sep. 12, 2008, pp. 1465-1468.
Ahn, et al., “Telling Humans and Computers Apart (Automatically)”, accessed on Mar. 28, 2022, «https://web.archive.org/web/20070104074513/http://www.captcha.net:80/», Carnegie Mellon, 2004.
Al-Qayed, et al., “Secure Centralised Mobile and Web-based control system Using GPRS with J2ME”, ICECS2003 Sharjah U.A.E 2003, 5 pgs.
Andersen, et al., “Using alert levels to enhance keystroke dynamic authentication”, Nov. 19-21, 2007, 9 pgs.
Athanasopoulos, et al., “Enhanced CAPTCHAs: Using animation to tell humans and computers apart” in IFIP International Conference on Communications and Multimedia Security, Springer, Berlin, Heidelberg, CMS 2006, LNCS 4237, Oct. 2006, pp. 97-108.
Baird & Lopresti, “Human Interactive Proofs: Second International Workshop,” HIP 2005, Bethlehem, PA, USA, Proceedings, vol. 3517, May 19-20, 2005, Springer.
Baird, et al., “ScatterType: A Legible but Hard-to-Segment CAPTCHA,” Eighth International Conference on Document Analysis and Recognition (ICDAR'05). IEEE, Korea, 2005, 6 pgs.
Chew & Baird, “BaffleText: a human interactive proof,” Proc. SPIE 5010, Document Recognition and Retrieval X, vol. 5010, Jan. 13, 2003, pp. 305-316.
Baird, et al., “Implicit captchas” In IS&T/SPIE Document Recognition and Retrieval XII, vol. 5676, Jan. 16-20, 2006, pp. 191-196.
Baker, “Click Fraud and How to Stop It”, accessed on Dec. 7, 2004 at «https://www.searchenginejournal.com/click-fraud-and-how-to-stop-it/1123/#close», SearchEngine Journal, 9 pgs.
Bannan, “Click Fraud is Growing on the Web”, accessed on Apr. 6, 2022, at «https://www.nytimes.com/2006/09/23/technology/23click.html», The New York Times, Sep. 23, 2006, 3 pgs.
Barras, “Robot Asimo can understand three voices at once”, retrieved at «https://www.newscientist.com/article/dn14105-robot-asimo-can-understand-three-voices-at-once/#:~:text=Advanced%20humanoid%20robot%20Asimo%20just,out%20their%20choices%20at%20once», NewScientist, Jun. 10, 2008, 5 pgs.
BotBarrier, “Turn any digital image into a clickable CAPTCHA!”, retrieved Nov. 16, 2022, from the Internet Archive Wayback Machine at «https://web.archive.org/web/20080621051833/http://www.botbarrier.com:80/», BotBarrier, 1 pg.
BotBarrier; “How effective is BotBarrier?” retrieved Nov. 16, 2022, from the Internet Archive Wayback Machine at «https://web.archive.org/web/20080322231602/http://www.botbarrier.com:80/faq.html», 1 pg.
Chen, et al., “Game bot detection based on avatar trajectory”, International Conference on Entertainment Computing, Springer, Berlin, Heidelberg, 2008, pp. 94-105.
Chen, et al., “Identifying MMORPG bots: A traffic analysis approach” Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, Jun. 2006, pp. 1-13.
Chew, et al., “Image Recognition CAPTCHAs” In Zhang, K., Zheng, Y. (eds) Information Security, ISC 2004, Lecture Notes in Computer Science, vol. 3225, Springer, Berlin, Heidelberg, 2004, 19 pgs.
Chow, et al., “Making CAPTCHAs clickable”, Proceedings of the 9th workshop on Mobile computing systems and applications, 2008, pp. 91-94.
Chu, et al., “A Hybrid Training Mechanism for Applying Neural Networks to Web-based Applications,” 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), vol. 4, IEEE, 2004, pp. 3543-3545.
Click Tracking Software—TrafficSentry.com, “Click Hawk Click Fraud Detection Software”, retrieved Nov. 18, 2022, from the Internet Archive Wayback Machine at «https://web.archive.org/web/20220127223730/http://www.clickhawk.com/», 2 pgs.
Coates, “Pessimal Print: A Reverse Turing Test”, IJDAR 5, 2003, pp. 158-163.
Codeen: “A Content Distribution Network for PlanetLab”, accessed Apr. 27, 2022, from the Internet Archive Wayback Machine at «https://web.archive.org/web/20070210195838/https://codeen.cs.princeton.edu:80», PlanetLab, 2 pgs.
Datta, et al., “Imagination: a robust image-based CAPTCHA generation system” In Proceedings of the 13th annual ACM international conference on Multimedia, 2005, pp. 331-334.
RPX Insight, “Dossier: Human Differential Intelligence, LLC”, RPX Corporation, Aug. 7, 2020, pp. 1-24.
Edwards, et al., “The JavaScript Anthology: 101 Essential Tips, Tricks & Hacks.”, Sitepoint, 2006, Part 1, 170 pgs.
Edwards, et al., “The JavaScript Anthology: 101 Essential Tips, Tricks & Hacks.”, Sitepoint, 2006, Part 2, 101 pgs.
Edwards, et al., “The JavaScript Anthology: 101 Essential Tips, Tricks & Hacks.”, Sitepoint, 2006, Part 3, 101 pgs.
Edwards, et al., “The JavaScript Anthology: 101 Essential Tips, Tricks & Hacks.”, Sitepoint, 2006, Part 4, 128 pgs.
Edwards, et al., “The JavaScript Anthology: 101 Essential Tips, Tricks & Hacks.”, Sitepoint, 2006, Part 5, 123 pgs.
Elson, et al., “Asirra: A CAPTCHA that exploits interest-aligned manual image categorization”, CCS 7, Oct. 29-Nov. 2, 2007, pp. 366-374.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-1”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-1—008 Claim Chart for Willner, pp. 1-152.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-10”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-10—008 Claim Chart for Tanzer, pp. 1-211.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-11-008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-11—008 Claim Chart for Eldon I, pp. 1-155.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-12-008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-12—008 Claim Chart with Obviousness Combinations, pp. 1-590.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-2-008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-2—008 Claim Chart for Ge, pp. 1-171.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-3-008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-3—008 Claim Chart for O'Sullivan I, pp. 1-207.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-4 008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-4—008 Claim Chart for O'Connell I, pp. 1-154.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-5 008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-5—008 Claim Chart for Krynski, pp. 1-178.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-6 008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-6—008 Claim Chart for Farahat, pp. 1-144.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-7”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-7—008 Claim Chart for Awadallah I, pp. 1-189.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-8 008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-8—008 Claim Chart for Kitts I, pp. 1-139.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit A-9 008”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. A-9—008 Claim Chart for Meek, pp. 1-143.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit B-1-885”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. B-1—885 Claim Chart for Willner I, pp. 1-140.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit B-2 885”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. B-2—885 Claim Chart for Ge, pp. 1-144.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit B-3”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. B-3—885 Claim Chart for O'Sullivan I, pp. 1-219.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit B-4 885”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. B-4—885 Claim Chart for Meek, pp. 1-157.
Nobots LLC v Google LLC, “Google's Preliminary Invalidity and Subject-Matter Ineligibility Contentions, Exhibit B-5-885”, Case No. 6:21-cv-1290-ADA, In the United States District Court for the Western District of Texas Waco Division, Ex. B-5—885 Claim Chart for Tanzer, pp. 1-163.
Related Publications (1)
Number Date Country
20200118021 A1 Apr 2020 US
Provisional Applications (1)
Number Date Country
61003743 Nov 2007 US
Divisions (1)
Number Date Country
Parent 15457099 Mar 2017 US
Child 16578823 US
Continuations (1)
Number Date Country
Parent 12313502 Nov 2008 US
Child 15457099 US