This disclosure relates generally to document processing workflows, and more specifically to workflows that enable documents to be distributed, assented to, and otherwise interacted with on an aural and/or oral basis.
Computers and electronic documents have become an increasingly indispensable part of modern life. In particular, as virtual storage containers for binary data, electronic documents have gained acceptance not only as a convenient replacement for conventional paper documents, but also as a useful way to store a wide variety of digital assets such as webpages, sound recordings, and videos. The increased use of electronic documents has resulted in the adaptation of conventional paper-based document processing workflows to the electronic realm. One such adaptation has been the increased use and acceptance of electronic signatures on agreements, contracts, and other documents. When negotiating parties reach an agreement with respect to a course of action, state of affairs, or other subject matter, the resulting agreement is usually reduced to writing and executed by the parties as a way to memorialize the terms of the agreement. Traditionally, a physical copy of the agreement was executed with a personalized stamp, seal, or handwritten signature. However, since this “reduction to writing” now often takes the form of an electronic document stored on a computer readable medium, electronic signatures have become commonplace and have indeed gained widespread legal recognition. Even where an agreement is never actually reduced to writing, the resulting “oral contract” may still be enforceable if evidentiary questions as to the substance of the underlying agreement can be resolved. The wide variety of different formats and legal requirements relating to agreements has resulted in a correspondingly wide variety of workflows—both conventional and electronic—that facilitate the negotiation, formation, execution, and management of agreements, contracts, and other documents.
While many efficiencies and benefits have been derived from the implementation of workflows based on electronic signatures, such workflows still suffer from several shortcomings and disadvantages. For example, many people lack the ability to read visual information on a computer screen. This could be caused by a physical disability, a lack of literacy in the language used to convey the content, or some other reason. Screen readers and text-to-speech software may be used to convert a textual document into audible speech, but such tools have limited utility if the document content is available only as a bitmap image or if the content cannot be decoded properly, such as due to the presence of foreign language content. Moreover, software associated with screen readers and text-to-speech tools is often expensive, tends to consume substantial computational resources, and usually needs to be installed on every terminal used by a particular end user. In addition, just as many people lack the ability to read an electronic document, many people lack the ability to execute an electronic signature. This may be caused by a physical disability, a lack of access to or familiarity with appropriate computer resources, or some other reason. Voice recognition software may address some of these problems in some circumstances, but it also can suffer from the same shortcomings associated with screen readers and text-to-speech tools. Voice recognition software also often requires training to a particular individual's vocal patterns. These challenges represent substantial obstacles to the further deployment of electronic document workflows, particularly with respect to document recipients who have limited access to computer resources.
Thus, and in accordance with certain embodiments of the present invention, workflows are provided herein that enable documents to be distributed, assented to, and otherwise interacted with on an aural and/or oral basis. Certain of the workflows disclosed herein can be implemented in a way that allows a recipient to receive, understand, and interact with a document using, for example, conventional components such as the microphone and speaker provided by a telephone. For instance, in one embodiment a document originator may send a document to a recipient with a request for an electronic signature. The document may include a link to a networked electronic signature server, and optionally, an audio version of the document terms. Following the link allows the recipient to access functionality provided by the electronic signature server, which can be configured to interact with the recipient using voice prompts and/or spoken commands. This can allow the recipient to, for example, listen to the audio version of the document terms and record an electronic signature that represents assent to such terms. The electronic signature server can record the recipient's electronic signature and incorporate it into the document, such that it forms part of the electronic document just as a traditional handwritten signature forms part of a signed paper document. This advantageously allows the recipient to interact with the document aurally and orally without having access to specialized computer hardware and/or software. This also eliminates any need for the recipient to print, manually sign, and send or otherwise further process a paper document. The resulting signed electronic document can be processed according to a wide variety of existing or subsequently developed electronic document processing workflows. 
The electronic signature server can be used by a large number of document originators and recipients using a variety of devices having various capabilities, including devices such as public kiosks, smartphones, and tablet computers. Numerous configurations and variations of such embodiments will be apparent in light of this disclosure.
As used herein, the term “document” refers, in addition to its ordinary meaning, to any collection of information that can be communicated between users of the various systems disclosed herein. As used herein, the term “document terms” refers, in addition to its ordinary meaning, to content provided within, or accessible via, a document. A document can take the form of a physical object, such as one or more papers containing printed information, or in the case of an “electronic document”, a computer readable medium containing digital data. Electronic documents can be rendered in a variety of different ways, such as via display on a screen, by printing using an output device, or aurally using an audio player and/or text-to-speech software. Thus it will be appreciated that electronic documents may include digital assets in addition to or instead of text; such digital assets may include, for example, audio clips, video clips, photographs, and other multimedia assets. Documents may encompass a virtually unlimited range of subject matter, including documents that contain terms that are to be agreed to among various participants in a given workflow. Examples of such documents include agreements, settlements, and legally binding contracts. For instance, a word processing file containing the terms of a legally enforceable contract and a compressed audio file containing an audio recording of the same contract terms would both be considered “documents” for the purposes of this disclosure. Such textual and audio components may be combined into a single “document” in certain embodiments. Documents may be communicated amongst users by a variety of techniques ranging from physically moving papers containing printed matter to wired and/or wireless transmission of digital data.
As used herein, the term “document originator” (or “originator”) refers, in addition to its ordinary meaning, to a user or entity that represents the source of a document in a workflow. Likewise, the term “document recipient” (or “recipient”) refers, in addition to its ordinary meaning, to a user or entity that represents the target of a document in a workflow. Thus, in a generalized workflow, a document originator can be understood as sending a document to a document recipient. It will be appreciated that a document originator may not necessarily be the creator, author, or generator of a particular document, but rather may simply be a user or entity that initiates a workflow by sending a document to a recipient. Likewise, the document recipient may not be the ultimate recipient of a document, particularly where a document is routed amongst multiple users in a given workflow. Thus, a single user or entity may act as both a document originator and a document recipient in different contexts. It will also be appreciated that the terms document originator and document recipient are not limited to people or users, but may also refer to entities, organizations, or workstations which originate or receive documents as part of a workflow. Finally, a given workflow may not necessarily involve the document itself being transmitted from document originator to document recipient; in some cases other data relating to a document, such as metadata and/or a network address, may be transmitted between a document originator and a document recipient.
As used herein, the term “electronic signature” refers, in addition to its ordinary meaning, to data that can be attached to or logically associated with an electronic document. Thus an electronic signature may comprise, for example, a string of characters, a bitmap image such as an image of a handwritten signature, an audio recording of a person saying a spoken phrase such as “I agree to these terms,” or a digital key. Electronic signatures may or may not be encrypted or otherwise encoded in a way that limits access and/or modification by unauthorized parties. An electronic signature may be personalized and associated with a particular individual, or may be generated automatically in response to a specified user input, such as the selection of an electronic checkbox, the clicking of a button on a graphical user interface, or the generation of a touch-tone using a telephone keypad. It will be appreciated that an electronic signature need not necessarily be incorporated into a particular electronic document, but may simply be stored in a resource managed by, for example, an electronic signature server, which can then create a logical association between the electronic signature and a particular electronic document. Where an electronic signature is encoded using binary digits, it may also be referred to as a “digital signature”. Examples of products which provide services associated with an electronic signature server include Adobe Echosign (Adobe Systems Incorporated, San Jose, Calif.) and DocuSign eSignature (DocuSign, Inc., San Francisco, Calif.).
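By way of illustration only, the logical association between a separately stored electronic signature and a particular electronic document might be sketched as a simple record keyed by a document identifier. The record layout, field names, and digest scheme below are assumptions made for this sketch and do not describe any particular product or embodiment.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class SignatureRecord:
    """An electronic signature stored apart from the document it signs.

    The logical association is kept by document_id; signature_data may
    hold a character string, a bitmap image, or digitized audio bytes.
    """
    document_id: str
    signature_data: bytes
    # Fingerprint of the document content at signing time, so that later
    # modification of the document can be detected.
    document_digest: str = ""

    def bind(self, document_content: bytes) -> None:
        """Record a digest of the document as it existed when signed."""
        self.document_digest = hashlib.sha256(document_content).hexdigest()

    def still_matches(self, document_content: bytes) -> bool:
        """Check that the document is unchanged since signing."""
        return self.document_digest == hashlib.sha256(document_content).hexdigest()
```

In this sketch, tampering with the signed content after the fact causes `still_matches` to return `False`.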
System Architecture
As illustrated in
In certain embodiments device 110 provides functionality that enables document originator 100 to generate a new document, modify an existing document, or retrieve an existing document from a storage device, such as a local storage device, networked document repository 500, or a storage resource hosted by electronic signature server 300. Documents may also be received from other users as part of a larger overarching workflow. For example, in one embodiment device 110 can be used to draft a new bill of sale for an automobile that document originator 100 wishes to sell. In another embodiment a contract provider can send an email to document originator 100 containing standard terms for an automobile bill of sale which originator 100 can then modify to conform to the particular requirements of his/her desired application. In any case, microphone 112 enables document originator 100 to generate an audio version of the terms of the document, which may include either a word-for-word transcription of the document or an audio summary of selected terms. Thus document originator 100 may use device 110 to generate a document that includes both textual and audio components. The textual and audio components of the document may be stored together as a single document or may be stored separately but connected by a logical association such as a network link.
Still referring to
In certain embodiments device 210 provides functionality that enables document recipient 200 to interact with and respond to a received document aurally and/or orally. For example, upon receiving a document containing both textual and audio components as described herein, speaker 214 can be used to play back the audio component of the received document. Speaker 214 can also be used to play voice prompts that are generated by electronic signature server 300. Recipient 200 can use microphone 212 to respond to such voice prompts. Thus, after listening to the audio component of the received document, recipient 200 can record an appropriate response such as a spoken electronic signature. Specifically, a spoken phrase by the recipient, such as “I agree to these terms,” can be recorded, digitized and incorporated into and stored together with the received document. Electronic signature server 300 can also be configured to record and/or respond appropriately to other spoken commands, such as “I do not agree to these terms,” or “Forward this document to John Doe”. In certain embodiments one or more prerecorded responses 218 can be stored on the recipient's device 210 and applied to a received document in accordance with a command provided by document recipient 200. This may be particularly useful where recipient 200 must frequently select an appropriate response to a received document from amongst a set of frequently used responses. Regardless of whether the document recipient 200 responds with a prerecorded response or otherwise, the audio response can be incorporated into and stored together with the received document. The resulting modified document can be further processed according to a pre-established workflow.
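A minimal sketch of how a prerecorded response 218 might be selected by a short spoken command follows; the command keywords and the dictionary-based storage are illustrative assumptions rather than part of any disclosed embodiment.

```python
def apply_prerecorded_response(prerecorded: dict, command: str) -> bytes:
    """Select a stored prerecorded audio response by a short spoken command.

    `prerecorded` maps command keywords (e.g. "agree", "decline") to audio
    data; the keyword scheme is an assumption made for this sketch.
    """
    key = command.strip().lower()
    if key not in prerecorded:
        raise KeyError(f"no prerecorded response for command: {command!r}")
    return prerecorded[key]
```

The selected audio can then be incorporated into the received document in the same way as a freshly recorded response.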
Referring still to the example embodiment illustrated in
In certain embodiments electronic signature server 300 includes an interactivity module 350 configured to provide an interface to users accessing the workflows and resources managed by electronic signature server 300. Such an interface may be provided by way of a graphical user interface rendered on a digital display, although other types of interfaces, such as voice response, touch-tone, or textual interfaces, can be implemented as well. The user interfaces can be provided to one or more document originators 100 and/or one or more document recipients 200. For example, in one embodiment interactivity module 350 is configured to generate a graphical user interface capable of receiving commands, parameters, and/or other metadata that define a workflow from document originator 100. Such parameters may specify, for example, how a particular document is to be routed amongst one or more document recipients 200 and how electronic signature server 300 should respond to various interactions between a particular document recipient 200 and the particular document. Likewise, interactivity module 350 can also be configured to generate a user interface capable of guiding a document recipient 200 through the process of receiving, reviewing, electronically signing (or declining to electronically sign), and/or otherwise interacting with a document. Thus in certain embodiments interactivity module 350 is capable of providing audible voice prompts to, and responding to spoken commands from, document recipient 200. Additional or alternative workflow aspects may be specified in other embodiments, and thus it will be appreciated that the present invention is not intended to be limited to any particular functionality provided by interactivity module 350.
As illustrated in
Certain embodiments of the system illustrated in
Another example of supplementary services provided in certain embodiments are authentication services 700. Authentication services 700 can be configured to authenticate document originators 100 and/or document recipients 200 before providing access to resources associated with electronic signature server 300, before accepting an electronic signature, or before enabling other functionalities. Authentication can be provided by any appropriate existing or subsequently-developed authentication scheme. For example, in certain embodiments document recipient 200 can be required to provide a password, public key, private key, or other authentication token before being authorized to apply an electronic signature to, or otherwise respond to, a received document. In other embodiments the authentication token provided by document recipient 200 takes the form of a voiceprint extracted from a spoken electronic signature itself. If the extracted voiceprint matches or substantially matches a voiceprint saved in a voiceprint repository 710, then the electronic signature can be considered to be authentic. The voiceprints can be considered to be substantially matching where there exists a reasonably high likelihood that the voiceprints were generated by the same person. The voiceprint saved in voiceprint repository 710 can be provided as part of an initial registration process completed by document recipient 200. It will be appreciated that the authentication procedures disclosed herein are optional, and that such procedures may be omitted entirely in some embodiments. In other embodiments authentication procedures can be applied to individual documents, such that document originator 100 may specify a password or spoken passphrase for a particular document that must be provided by document recipient 200 before the document is allowed to be signed or otherwise responded to.
This provides document originator 100 with some assurance that the document has not been accessed, executed or otherwise interacted with by an unauthorized party.
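One plausible way to decide that two voiceprints "substantially match" is to compare fixed-length feature vectors against a similarity threshold. The vectors below stand in for whatever features a production voiceprint system would actually extract, and the 0.9 threshold is an arbitrary assumption made for this sketch.

```python
import math


def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def substantially_matches(candidate: list, enrolled: list, threshold: float = 0.9) -> bool:
    """Return True when the candidate voiceprint is close enough to the
    enrolled voiceprint for the spoken signature to be treated as authentic."""
    return cosine_similarity(candidate, enrolled) >= threshold
```

Here the enrolled vector would correspond to the voiceprint saved in voiceprint repository 710 during initial registration.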
Document originator 100, document recipient 200, and electronic signature server 300 can communicate with each other via network 400. Network 400 can also be used to access supplementary services and resources, such as networked document repository 500, transcription services 600, and authentication services 700. Network 400 may be a local area network (such as a home-based or office network), a wide area network (such as the Internet), or a combination of such networks, whether public, private, or both. For example, in certain embodiments at least a portion of the functionality associated with network 400 can be provided by a PSTN, thereby allowing a user of a conventional telephone to interact with electronic signature server 300. In general, communications amongst the various entities, resources, and services described herein may occur via wired and/or wireless connections, such as may be provided by Wi-Fi or mobile data networks. In some cases access to resources on a given network or computing system may require credentials such as usernames, passwords, and/or compliance with any other suitable security mechanism. Furthermore, while only one document originator 100 and one document recipient 200 are illustrated in the example embodiment of
The various embodiments disclosed herein can be implemented in various forms of hardware, software, firmware, and/or special purpose processors. For example, in one embodiment a non-transitory computer readable medium has instructions encoded thereon that, when executed by one or more processors, cause one or more of the document distribution and interaction methodologies disclosed herein to be implemented. The instructions can be encoded using any suitable programming language, such as C, C++, object-oriented C, JavaScript, Visual Basic .NET, BASIC, or alternatively, using custom or proprietary instruction sets. Such instructions can be provided in the form of one or more computer software applications and/or applets that are tangibly embodied on a memory device, and that can be executed by a computer having any suitable architecture. In one embodiment, the system can be hosted on a given website and implemented, for example, using JavaScript or another suitable browser-based technology.
The functionalities disclosed herein can optionally be incorporated into other software applications, such as document management systems or document viewers. For example, an application configured to view portable document format (PDF) files can be configured to implement certain of the functionalities disclosed herein upon detecting the presence of signature fields or other metadata in a given document, including signature fields intended for a handwritten signature. The systems disclosed herein may also optionally leverage services provided by other software applications, such as electronic mail readers. The computer software applications disclosed herein may include a number of different modules, sub-modules, or other components of distinct functionality, and can provide information to, or receive information from, still other components and/or services. These modules can be used, for example, to communicate with input and/or output devices such as a display screen, a touch sensitive surface, a printer, and/or any other suitable input/output device. Other components and functionality not reflected in the illustrations will be apparent in light of this disclosure, and it will be appreciated that the claimed invention is not intended to be limited to any particular hardware or software configuration. Thus in other embodiments electronic signature server 300 may comprise additional, fewer, or alternative subcomponents as compared to those included in the illustrated embodiments.
The aforementioned non-transitory computer readable medium may be any suitable medium for storing digital information, such as a hard drive, a server, a flash memory, and/or random access memory. In alternative embodiments, the computers and/or modules disclosed herein can be implemented with hardware, including gate level logic such as a field-programmable gate array (FPGA), or alternatively, a purpose-built semiconductor such as an application-specific integrated circuit (ASIC). Still other embodiments may be implemented with a microcontroller having a number of input/output ports for receiving and outputting data, and a number of embedded routines for carrying out the various functionalities disclosed herein. It will be apparent that any suitable combination of hardware, software, and firmware can be used, and that the present invention is not intended to be limited to any particular system architecture.
Methodology
As illustrated in
For example, in a modified embodiment electronic signature server 300 is configured to detect the absence of an audio component and, in response to such detection, can be further configured to (a) prompt document originator 100 to record an appropriate audio component, and/or (b) automatically generate an audio component by leveraging functionality provided by text-to-speech module 610. In such cases, electronic signature server 300 can optionally be configured to detect the absence of an audio component before document originator 100 initiates a workflow involving the document. This allows the automatically-generated audio version to be prepared in advance, thereby making the system more responsive to the document originator's initiation of a workflow. In applications where multiple languages are to be supported, a translation service such as a computer-based automated translation service can be invoked to generate versions of the document in one or more alternative languages, which can then be submitted to text-to-speech module 610 for generation of corresponding audio versions of the document in the one or more alternative languages.
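The detection-and-generation step described above can be sketched as follows; the dictionary-based document layout and the `text_to_speech` callable are hypothetical stand-ins for the functionality provided by text-to-speech module 610.

```python
def ensure_audio_component(document: dict, text_to_speech) -> dict:
    """Attach an audio component to the document if none is present.

    `document` is assumed to carry a "text" entry and an optional "audio"
    entry; `text_to_speech` is any callable that turns text into audio
    bytes. Both conventions are assumptions made for this sketch.
    """
    if not document.get("audio"):
        # Absence detected: generate the audio version automatically, so
        # it is ready before a workflow involving the document begins.
        document["audio"] = text_to_speech(document["text"])
    return document
```

An existing audio component, such as one recorded by the document originator, is left untouched.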
Certain embodiments are capable of supporting documents having audio components available in multiple languages. In particular, a determination can be made with respect to whether a given document includes or is otherwise associated with audio recordings in multiple languages (see reference numeral 10c in
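Selection among audio recordings in multiple languages might be sketched as a simple lookup with a fallback; the language codes and the default-language policy below are illustrative assumptions.

```python
def select_audio_recording(recordings: dict, preferred_language: str,
                           default_language: str = "en"):
    """Pick the audio recording matching the recipient's preferred
    language, falling back to a default when no match exists.

    `recordings` maps language codes to audio data; the code scheme and
    fallback behavior are assumptions made for this sketch.
    """
    if preferred_language in recordings:
        return recordings[preferred_language]
    return recordings.get(default_language)
```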
Once an appropriate audio recording has been identified, document recipient 200 can then listen to the audio recording (see reference numeral 10d in
As disclosed herein, in certain embodiments one or more prerecorded responses 218 can be stored on the recipient's device 210 and applied to a received document in accordance with a command provided by document recipient 200. Thus the recipient's device 210 can be configured to determine whether one or more prerecorded responses 218 are available (see reference numeral 10f in
For instance, electronic signature server 300 can be configured to analyze the recipient's response and determine whether it sufficiently corresponds to a designated statement indicating assent to the document terms. Specifically, interactivity module 350 can instruct, via voice prompt, document recipient 200 to make a designated statement, such as “I agree to the terms of this document,” in order to electronically sign a received document. Where the recipient's response sufficiently corresponds to the designated statement, the statement itself can be considered an electronic signature. Where the recipient's response does not correspond to the designated statement, it can be analyzed to determine whether it corresponds to other spoken commands recognized by interactivity module 350. In an alternative embodiment, document recipient 200 may annotate the received document, either in addition to or instead of electronically signing the document. Such annotation may be transcribed, for example, by leveraging functionality provided by speech-to-text module 620. Analyzing the recipient's response using resources provided by electronic signature server 300 and/or transcription services 600 advantageously eliminates any need for document recipient 200 to have a device capable of providing such functionality. This allows document recipient 200 to obtain such functionality using a traditional telephone, a public kiosk, or other device with limited audio processing capacity or software.
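The correspondence test described above might be approximated by a plain string-similarity comparison of the transcribed response against the designated statement. A production system would use speech-oriented matching, so the normalization and the 0.8 threshold here are assumptions made for this sketch.

```python
import difflib


def corresponds_to(transcript: str, designated_statement: str,
                   threshold: float = 0.8) -> bool:
    """Decide whether a transcribed spoken response sufficiently
    corresponds to the designated assent statement.

    Both strings are lowercased and whitespace-normalized before a
    character-ratio comparison; the threshold is an assumption.
    """
    a = " ".join(transcript.lower().split())
    b = " ".join(designated_statement.lower().split())
    return difflib.SequenceMatcher(None, a, b).ratio() >= threshold
```

A transcript that fails this test can then be checked against the other spoken commands recognized by the interactivity module.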
Document repository 500 may be updated depending on the nature of the recipient's response (see reference numeral 10g in
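A repository status update keyed to the nature of the recipient's response might be sketched as follows; the status vocabulary and the dictionary-based repository are illustrative assumptions, not part of any disclosed embodiment.

```python
def update_document_status(repository: dict, document_id: str,
                           response_kind: str) -> str:
    """Update workflow metadata for a document based on the kind of
    response received ("signed", "declined", or "forwarded").

    `repository` is a dict keyed by document id; unrecognized response
    kinds are parked for review. The vocabulary is an assumption.
    """
    status_by_response = {
        "signed": "executed",
        "declined": "rejected",
        "forwarded": "awaiting_next_recipient",
    }
    new_status = status_by_response.get(response_kind, "pending_review")
    repository[document_id]["status"] = new_status
    return new_status
```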
Thus, it will be appreciated that, in general, a wide range of supplemental processing can be invoked based on exactly how document recipient 200 responds to the request for an electronic signature (see reference numeral 10h in
Still referring to
Alternative Implementations
While the example embodiments illustrated in
It will be appreciated that in alternative embodiments document recipient 200 can be provided with the audio recording of the document without using facsimile machine 220. In such alternative embodiments, electronic signature server 300 can be configured to call a voice telephone number associated with document recipient 200 and interact with document recipient 200 using voice prompts and spoken commands. Such interaction could include a reading of the document terms. Likewise in certain alternative embodiments interactions between document originator 100 and electronic signature server 300 may take the form of voice prompts and spoken commands.
Once electronic signature server 300 has identified the received document, a determination can be made with respect to whether that document includes or is otherwise associated with audio recordings in multiple languages (see reference numeral 30d in
After listening to the audio recording, or being provided with the opportunity to listen to the audio recording, document recipient 200 can record an appropriate response using telephone 230 (see reference numeral 30g in
Numerous variations and configurations will be apparent in light of this disclosure. For instance, one example embodiment of the present invention provides a computer-implemented document processing method. In some cases the method comprises receiving, from a document originator, metadata that identifies a document. The document includes a plurality of document terms and is to be distributed to a document recipient as part of a workflow. The method further comprises saving, in a document repository, an audio recording corresponding to at least a portion of the document terms that are included within the document. The method further comprises sending, to the document recipient in response to a received request, at least a portion of the audio recording. The method further comprises prompting the document recipient with an audible request to provide a spoken response to at least one of the document and the audio recording. The method further comprises receiving, from the document recipient, the spoken response. The method further comprises saving the spoken response such that it is correlated with the document. In some cases the spoken response comprises a phrase indicating that the document recipient has assented to the document terms. In some cases the method further comprises (a) comparing a voiceprint of the spoken response with a voiceprint of an authorized document signer; and (b) the spoken response is correlated with the document only if the voiceprint of the spoken response substantially matches the voiceprint of the authorized document signer. In some cases the method further comprises sending a notification based on the spoken response. In some cases the audible request to provide the spoken response is sent to the document recipient after the audio recording is sent to the document recipient.
In some cases the at least a portion of the audio recording is sent to the document recipient via a public switched telephone network. In some cases the document is saved in the document repository such that the document is logically associated with the audio recording. In some cases the audio recording comprises a reading of at least a portion of the document terms by the document originator. In some cases the method further comprises (a) receiving the document from the document originator; and (b) submitting the document to a text-to-speech module configured to generate the audio recording based on the document. In some cases sending at least a portion of the audio recording to the document recipient comprises providing an audio stream of the audio recording to a player provided by the document recipient.
Another example embodiment of the present invention provides an electronic signature system that comprises a document repository storing a document comprising a plurality of document terms. The document repository also stores an audio recording corresponding to at least a portion of the document terms. The document repository also stores metadata indicating a status of the document with respect to a workflow. The system further comprises an interactivity module configured to send, to a document recipient in response to a received request, at least a portion of the audio recording and an audible request to provide a spoken response to the audio recording. The interactivity module is further configured to receive the spoken response from the document recipient. The system further comprises a document status module configured to modify the metadata stored in the document repository based on the spoken response received from the document recipient. In some cases the system further comprises a transcription services module configured to generate the audio recording based on text-to-speech processing of the document. In some cases (a) the spoken response comprises an electronic signature indicating that the document recipient has assented to the document terms; and (b) the metadata is modified to indicate that the document recipient has assented to the document terms. In some cases the system further comprises a transcription services module configured to generate a transcript of the spoken response. The transcript of the spoken response is stored in the document repository and includes a logical association to the document. In some cases the system further comprises an authentication services module configured to compare a voiceprint of the spoken response with a voiceprint of an authorized document signer. The metadata is modified only if the voiceprint of the spoken response substantially matches the voiceprint of the authorized document signer. 
In some cases the document repository comprises a networked storage resource provided remotely with respect to the interactivity module and the document status module.
Another example embodiment of the present invention provides a computer program product encoded with instructions that, when executed by one or more processors, cause a document workflow process to be carried out. The process comprises receiving, from a document originator, metadata that identifies a document. The document includes a plurality of document terms and is to be distributed to a document recipient as part of a workflow. The process further comprises saving, in a document repository, an audio recording corresponding to at least a portion of the document terms that are included within the document. The process further comprises sending, to the document recipient in response to a received request, at least a portion of the audio recording. The process further comprises prompting the document recipient with an audible request to provide an audible response to at least one of the document and the audio recording. The process further comprises receiving, from the document recipient, the audible response. The process further comprises saving the audible response such that it is correlated with the document. In some cases receiving the audible response comprises receiving a prerecorded response from the document recipient. In some cases (a) the audible response comprises a phrase indicating that the document recipient has assented to the document terms; (b) the document workflow process further comprises comparing a voiceprint of the audible response with a voiceprint of an authorized document signer; and (c) the audible response is correlated with the document only if the voiceprint of the audible response substantially matches the voiceprint of the authorized document signer. In some cases the audible response comprises touch-tones generated by a telephone.
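One concrete form the audible response can take is telephone touch-tones. The standard DTMF keypad (ITU-T Recommendation Q.23) encodes each key as a pair of row and column frequencies, so a received response can be mapped back to key presses. The decoder below is an illustrative sketch assuming the tone pairs have already been extracted from the audio; it is not part of the disclosure.

```python
# DTMF frequency table (ITU-T Q.23): each key is a (row, column) tone pair.
DTMF_ROWS = {697: 0, 770: 1, 852: 2, 941: 3}
DTMF_COLS = {1209: 0, 1336: 1, 1477: 2, 1633: 3}
KEYPAD = [
    ["1", "2", "3", "A"],
    ["4", "5", "6", "B"],
    ["7", "8", "9", "C"],
    ["*", "0", "#", "D"],
]

def decode_tone(low_hz: int, high_hz: int) -> str:
    # Look up the key at the intersection of the row and column frequencies.
    return KEYPAD[DTMF_ROWS[low_hz]][DTMF_COLS[high_hz]]

def decode_response(tone_pairs) -> str:
    # A touch-tone response arrives as a sequence of (row, column)
    # frequency pairs and decodes to a string of key presses.
    return "".join(decode_tone(lo, hi) for lo, hi in tone_pairs)
```

For example, `decode_tone(697, 1209)` yields `"1"`, which a workflow might treat as an assent keypress before correlating the response with the document.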
The foregoing description of the embodiments of the present invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the particular disclosed embodiments. Many modifications and variations are possible in light of this disclosure. Thus it is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
Number | Name | Date | Kind |
---|---|---|---|
4621334 | Garcia | Nov 1986 | A |
4805222 | Young | Feb 1989 | A |
5825880 | Sudia et al. | Oct 1998 | A |
5910987 | Ginter et al. | Jun 1999 | A |
6073101 | Maes | Jun 2000 | A |
6091835 | Smithies et al. | Jul 2000 | A |
6157935 | Tran | Dec 2000 | A |
6240091 | Ginzboorg et al. | May 2001 | B1 |
6615234 | Adamske et al. | Sep 2003 | B1 |
6691089 | Su | Feb 2004 | B1 |
6928421 | Craig et al. | Aug 2005 | B2 |
6959382 | Kinnis et al. | Oct 2005 | B1 |
7206938 | Bender | Apr 2007 | B2 |
7562053 | Twining | Jul 2009 | B2 |
7581109 | De Boursetty et al. | Aug 2009 | B2 |
7694143 | Karimisetty et al. | Apr 2010 | B2 |
7779355 | Erol | Aug 2010 | B1 |
7895166 | Foygel et al. | Feb 2011 | B2 |
7996367 | Foygel et al. | Aug 2011 | B2 |
7996439 | Foygel et al. | Aug 2011 | B2 |
8126868 | Vincent | Feb 2012 | B1 |
8230232 | Ahmed | Jul 2012 | B2 |
8234494 | Bansal | Jul 2012 | B1 |
8332253 | Farmer | Dec 2012 | B1 |
8443443 | Nordstrom | May 2013 | B2 |
8844055 | Follis et al. | Sep 2014 | B2 |
8918311 | Johnson | Dec 2014 | B1 |
8930308 | Johnson | Jan 2015 | B1 |
9058515 | Amtrup et al. | Jun 2015 | B1 |
9059858 | Giardina | Jun 2015 | B1 |
9292876 | Shimkus | Mar 2016 | B1 |
9432368 | Saxena et al. | Aug 2016 | B1 |
20010002485 | Brisbee | May 2001 | A1 |
20020038290 | Cochran | Mar 2002 | A1 |
20020062322 | Genghini | May 2002 | A1 |
20020091651 | Petrogiannis | Jul 2002 | A1 |
20020095290 | Kahn | Jul 2002 | A1 |
20020103656 | Bahler | Aug 2002 | A1 |
20030009513 | Ludwig | Jan 2003 | A1 |
20030037004 | Buffum et al. | Feb 2003 | A1 |
20030074216 | Salle | Apr 2003 | A1 |
20030083906 | Howell et al. | May 2003 | A1 |
20030130953 | Narasimhan | Jul 2003 | A1 |
20030154083 | Kobylevsky | Aug 2003 | A1 |
20030177361 | Wheeler | Sep 2003 | A1 |
20030187671 | Kumhyr et al. | Oct 2003 | A1 |
20030217275 | Bentley et al. | Nov 2003 | A1 |
20040024688 | Bi | Feb 2004 | A1 |
20040102959 | Estrin | May 2004 | A1 |
20040139344 | Maurer | Jul 2004 | A1 |
20040167847 | Nathan | Aug 2004 | A1 |
20040187037 | Checco | Sep 2004 | A1 |
20040204939 | Liu | Oct 2004 | A1 |
20040225887 | O'Neil | Nov 2004 | A1 |
20040243811 | Frisch et al. | Dec 2004 | A1 |
20040264652 | Erhart | Dec 2004 | A1 |
20050132196 | Dietl | Jun 2005 | A1 |
20050228665 | Kobayashi | Oct 2005 | A1 |
20050228999 | Jerdonek et al. | Oct 2005 | A1 |
20050289345 | Haas et al. | Dec 2005 | A1 |
20060020460 | Itou | Jan 2006 | A1 |
20060041828 | King | Feb 2006 | A1 |
20060110011 | Cohen et al. | May 2006 | A1 |
20060122880 | Franco | Jun 2006 | A1 |
20060143462 | Jacobs | Jun 2006 | A1 |
20060157559 | Levy et al. | Jul 2006 | A1 |
20060212813 | Yalovsky et al. | Sep 2006 | A1 |
20060253324 | Miller | Nov 2006 | A1 |
20060280339 | Cho | Dec 2006 | A1 |
20070055517 | Spector | Mar 2007 | A1 |
20070113164 | Hansen et al. | May 2007 | A1 |
20070124507 | Gurram | May 2007 | A1 |
20070143398 | Graham | Jun 2007 | A1 |
20070220614 | Ellis | Sep 2007 | A1 |
20070226511 | Wei et al. | Sep 2007 | A1 |
20080015883 | Hermann | Jan 2008 | A1 |
20080177550 | Mumm | Jul 2008 | A1 |
20080180213 | Flax | Jul 2008 | A1 |
20080195389 | Zhang | Aug 2008 | A1 |
20080209229 | Ramakrishnan | Aug 2008 | A1 |
20090025087 | Peirson | Jan 2009 | A1 |
20090062944 | Wood | Mar 2009 | A1 |
20090112767 | Hammad et al. | Apr 2009 | A1 |
20090116703 | Schultz | May 2009 | A1 |
20090117879 | Pawar et al. | May 2009 | A1 |
20090177300 | Lee | Jul 2009 | A1 |
20090222269 | Mori | Sep 2009 | A1 |
20090228584 | Maes et al. | Sep 2009 | A1 |
20090254345 | Fleizach | Oct 2009 | A1 |
20090254572 | Redlich | Oct 2009 | A1 |
20090260060 | Smith | Oct 2009 | A1 |
20090307744 | Nanda et al. | Dec 2009 | A1 |
20090327735 | Feng et al. | Dec 2009 | A1 |
20100131533 | Ortiz | May 2010 | A1 |
20100161993 | Mayer | Jun 2010 | A1 |
20100281254 | Carro | Nov 2010 | A1 |
20100306670 | Quinn et al. | Dec 2010 | A1 |
20110022940 | King | Jan 2011 | A1 |
20110047385 | Kleinberg | Feb 2011 | A1 |
20110212717 | Rhoads et al. | Sep 2011 | A1 |
20110225485 | Schnitt | Sep 2011 | A1 |
20120072837 | Triola | Mar 2012 | A1 |
20120190405 | Kumaran | Jul 2012 | A1 |
20120254332 | Irvin | Oct 2012 | A1 |
20130006642 | Saxena | Jan 2013 | A1 |
20130046645 | Grigg | Feb 2013 | A1 |
20130089300 | Soundararajan | Apr 2013 | A1 |
20130103723 | Hori | Apr 2013 | A1 |
20130132091 | Skerpac | May 2013 | A1 |
20130138438 | Bachtiger | May 2013 | A1 |
20130166915 | Desai | Jun 2013 | A1 |
20130179171 | Howes | Jul 2013 | A1 |
20130182002 | Macciola | Jul 2013 | A1 |
20130191287 | Gainer, III | Jul 2013 | A1 |
20130263283 | Peterson | Oct 2013 | A1 |
20130269013 | Parry | Oct 2013 | A1 |
20130283189 | Basso | Oct 2013 | A1 |
20130326225 | Murao | Dec 2013 | A1 |
20130339358 | Huibers | Dec 2013 | A1 |
20130346356 | Welinder | Dec 2013 | A1 |
20140019761 | Shapiro | Jan 2014 | A1 |
20140019843 | Schmidt | Jan 2014 | A1 |
20140078544 | Motoyama | Mar 2014 | A1 |
20140079297 | Tadayon et al. | Mar 2014 | A1 |
20140108010 | Maltseff | Apr 2014 | A1 |
20140168716 | King | Jun 2014 | A1 |
20140236978 | King | Aug 2014 | A1 |
20140244451 | Mayer | Aug 2014 | A1 |
20140279324 | King | Sep 2014 | A1 |
20140282243 | Eye et al. | Sep 2014 | A1 |
20140294302 | King | Oct 2014 | A1 |
20140343943 | Al-Telmissani | Nov 2014 | A1 |
20140365281 | Onischuk | Dec 2014 | A1 |
20140372115 | LeBeau | Dec 2014 | A1 |
20150012417 | Joao | Jan 2015 | A1 |
20150016661 | Lord | Jan 2015 | A1 |
20150063714 | King | Mar 2015 | A1 |
20150073823 | Ladd et al. | Mar 2015 | A1 |
20150100578 | Rosen | Apr 2015 | A1 |
20150213404 | Follis | Jul 2015 | A1 |
20150245111 | Berry | Aug 2015 | A1 |
20150294094 | Hefeeda | Oct 2015 | A1 |
20160078869 | Syrdal | Mar 2016 | A1 |
Number | Date | Country |
---|---|---|
WO000148986 | Jul 2001 | WO |
Entry |
---|
Craig Le Clair, “What to Look for in E-Signature Providers” (Nov. 15, 2011). Available at https://www.echosign.adobe.com/content/dam/echosign/docs/pdfs/Forrester_What_To_Look_For_In_E-Signature_Providers_Nov_2011.pdf. |
Simske, Steven J. Dynamic Biometrics: The Case for a Real-Time Solution to the Problem of Access Control, Privacy and Security. 2009 First IEEE International Conference on Biometrics, Identity and Security. Pub. Date: 2009. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5507535. |
Maeder, Anthony; Fookes, Clinton; Sridharan, Sridha. Gaze Based User Authentication for Personal Computer Applications. Proceedings of 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing. Pub. Date: 2004. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1434167. |
Notice of Allowance received in U.S. Appl. No. 14/107,967 (dated Nov. 3, 2016) (6 pages). |
Notice of Allowance received in U.S. Appl. No. 14/551,560 (dated Nov. 4, 2016) (19 pages). |
Notice of Allowance received in U.S. Appl. No. 14/534,583 (8 pages) (dated May 2, 2017). |
Araújo et al., “User Authentication Through Typing Biometrics Features”, IEEE Transactions on Signal Processing, vol. 53, No. 2, pp. 851-855 (2005). |
Deng et al., “Keystroke Dynamics User Authentication Based on Gaussian Mixture Model and Deep Belief Nets”, ISRN Signal Processing, vol. 2013, Article ID 565183, 7 pages (2013). |
Moskovitch et al., “Identity Theft, Computers and Behavioral Biometrics”, Proceedings of the 2009 IEEE International Conference on Intelligence and Security Informatics, pp. 155-160 (2009). |
Notice of Allowance in related U.S. Appl. No. 14/859,944 dated Mar. 1, 2017, 10 pages. |
Notice of Allowance received in U.S. Appl. No. 14/840,380 (11 pages) (dated Aug. 23, 2017). |
Number | Date | Country |
---|---|---|
20150127348 A1 | May 2015 | US |