When two devices are incompatible or lack access to a connecting network, data transfer between them might still be possible using displayed and captured visual codes, such as a QR code, a PDF417 code, etc.
However, the standards for visual codes impose maximum data limits that are smaller than the size of many image files or files containing non-image biometric data, which are often used to prove identity, e.g., to law enforcement officials or airlines.
In an example embodiment, a method is described. According to the method, software on an image-capturing device iteratively captures a visual code in a series of visual codes displayed in a repeating progression on a screen of a mobile device. The visual code was generated from a display block that resulted from a partition of an original data file into a series of display blocks of at least a specified size. Each display block has a header that includes an ordered identifying block number and a count of the display blocks in the series. The software converts the visual code back into a display block and reads a header for the display block, discarding the display block if it has already been captured, as determined by the ordered identifying block number in the header. If the display block has not been discarded, the software removes the header of the display block and records the ordered identifying block number. The software stops the iterative capturing when all of the display blocks in the series have been captured, as determined by the count in the header, and coalesces the captured display blocks into the original data file, using an order determined by the ordered identifying block numbers. Then, in an example embodiment, the software compares the original data file with a copy of the original data file obtained from another source.
In another example embodiment, another method is described. According to the method, software on a mobile device partitions an original data file into a series of display blocks of at least a specified size. Then the software adds a header to each display block. The header includes an ordered identifying block number and a count of the display blocks in the series. The software generates a visual code for each display block. The software then iteratively presents each of the visual codes in the series in a repeating progression displayed on a screen of the mobile device for capture by an image-capturing device, which converts each visual code back into a display block and coalesces the captured display blocks into the original data file, using an order determined by the ordered identifying block numbers.
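The partition-and-coalesce scheme described in the embodiments above can be illustrated with a short sketch. The function names (`partition`, `coalesce`) and the exact header encoding are illustrative assumptions, not taken from the embodiments; the header follows the `<block-number>,<total-blocks>.<data>` form described later in this disclosure.

```python
# Illustrative sketch, assuming a simple byte-level header encoding.
# All names here are hypothetical.
import math

HEADER_SEP = b"."   # separates the header from the payload
FIELD_SEP = b","    # separates the two header fields

def partition(data: bytes, block_size: int) -> list:
    """Split data into display blocks, each prefixed with a
    '<block-number>,<total-blocks>.' header."""
    total = math.ceil(len(data) / block_size)
    blocks = []
    for i in range(total):
        payload = data[i * block_size:(i + 1) * block_size]
        header = str(i).encode() + FIELD_SEP + str(total).encode() + HEADER_SEP
        blocks.append(header + payload)
    return blocks

def coalesce(blocks: list) -> bytes:
    """Reassemble the original data from captured blocks, which may
    arrive out of order or be captured more than once."""
    seen = {}
    total = None
    for b in blocks:
        header, _, payload = b.partition(HEADER_SEP)
        num_s, _, total_s = header.partition(FIELD_SEP)
        num, total = int(num_s), int(total_s)
        seen.setdefault(num, payload)  # discard duplicate captures
    assert total is not None and len(seen) == total, "not all blocks captured"
    return b"".join(seen[i] for i in range(total))
```

In this sketch the ordered identifying block number both deduplicates repeated captures and determines the reassembly order, matching the roles the header plays in the embodiments above.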
In another example embodiment, another method is described. According to the method, software on an image-capturing device iteratively captures a QR code in a series of QR codes displayed in a repeating progression on a screen of a mobile device. The QR code was generated from a display block that resulted from a partition of an original data file, which had been digitally signed with a private key, into a series of display blocks of at least a specified size. Each display block has a header that includes an ordered identifying block number and a count of the display blocks in the series. The software converts the QR code back into a display block and reads a header for the display block, discarding the display block if it has already been captured, as determined by the ordered identifying block number in the header. If the display block has not been discarded, the software removes the header of the display block and records the ordered identifying block number. The software stops the iterative capturing when all of the display blocks in the series have been captured, as determined by the count in the header, and coalesces the captured display blocks into the original data file, using an order determined by the ordered identifying block numbers. Then the software hashes a copy of the original data file and uses the hashed copy and a public key matching the private key to verify the original data file that had been digitally signed and the digital signing.
In another embodiment, a method of confirming receipt is described. The method includes iteratively capturing by a receiving device visual codes in a series of visual codes displayed in a repeating progression on a screen of a sending device, wherein a corresponding captured visual code was generated from a display block that resulted from a partition of an original data file into a series of display blocks of at least a specified size and wherein each display block is converted to a corresponding string and corresponding header that includes an ordered identifying display block number and a total count of the display blocks in the series, wherein each corresponding string is converted to a corresponding visual code. The method includes converting each of the captured visual codes into a corresponding string and reading a header for the corresponding string. The method includes determining which display blocks have been captured in the series of display blocks. The method includes generating a confirmation message including information indicating which display blocks have been received. The method includes sending the confirmation message over a wireless communication link to the sending device in order to reduce the number of visual codes being displayed by the sending device.
In still another embodiment, a non-transitory computer-readable medium storing a computer program for confirming receipt is described. The computer-readable medium includes program instructions for iteratively capturing by a receiving device visual codes in a series of visual codes displayed in a repeating progression on a screen of a sending device, wherein a corresponding captured visual code was generated from a display block that resulted from a partition of an original data file into a series of display blocks of at least a specified size and wherein each display block is converted to a corresponding string and corresponding header that includes an ordered identifying display block number and a total count of the display blocks in the series, wherein each corresponding string is converted to a corresponding visual code. The computer-readable medium includes program instructions for converting each of the captured visual codes into a corresponding string and reading a header for the corresponding string. The computer-readable medium includes program instructions for determining which display blocks have been captured in the series of display blocks. The computer-readable medium includes program instructions for generating a confirmation message including information indicating which display blocks have been received. The computer-readable medium includes program instructions for sending the confirmation message over a wireless communication link to the sending device in order to reduce the number of visual codes being displayed by the sending device.
In another embodiment, a computer system is described and includes a processor and memory coupled to the processor and having stored therein instructions that, if executed by the computer system, cause the computer system to execute a method for confirming receipt. The method includes iteratively capturing by a receiving device visual codes in a series of visual codes displayed in a repeating progression on a screen of a sending device, wherein a corresponding captured visual code was generated from a display block that resulted from a partition of an original data file into a series of display blocks of at least a specified size and wherein each display block is converted to a corresponding string and corresponding header that includes an ordered identifying display block number and a total count of the display blocks in the series, wherein each corresponding string is converted to a corresponding visual code. The method includes converting each of the captured visual codes into a corresponding string and reading a header for the corresponding string. The method includes determining which display blocks have been captured in the series of display blocks. The method includes generating a confirmation message including information indicating which display blocks have been received. The method includes sending the confirmation message over a wireless communication link to the sending device in order to reduce the number of visual codes being displayed by the sending device.
In still another embodiment, a method for confirming receipt is described. The method includes iteratively capturing by a receiving device visual codes in a series of visual codes displayed in a repeating progression on a screen of a sending device, wherein a corresponding captured visual code was generated from a display block that resulted from a partition of an original data file into a series of display blocks of at least a specified size and wherein each display block is converted to a corresponding string and corresponding header that includes an ordered identifying display block number and a total count of the display blocks in the series, wherein each corresponding string is converted to a corresponding visual code. The method includes converting each of the captured visual codes into a corresponding string and reading a header for the corresponding string. The method includes determining which display blocks have been captured in the series of display blocks. The method includes generating a confirmation message including information indicating which display blocks have been received. The method includes displaying the confirmation message for capture by the sending device in order to reduce the number of visual codes being displayed by the sending device.
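The confirmation-message step common to the receipt-confirmation embodiments above can be sketched as follows. The message format shown here is an assumption for illustration only; the embodiments do not specify one.

```python
# Hypothetical sketch of building a receipt-confirmation message.
# The "RECEIVED:/MISSING:" wire format is an assumption, not from
# the embodiments.
def confirmation_message(received: set, total: int) -> str:
    """Report which display blocks have been received so the sending
    device can reduce the number of visual codes it displays."""
    missing = sorted(set(range(total)) - received)
    if not missing:
        return "RECEIVED:ALL"
    return ("RECEIVED:" + ",".join(str(n) for n in sorted(received)) +
            ";MISSING:" + ",".join(str(n) for n in missing))
```

The sending device could then restrict its repeating progression to only the blocks listed as missing, which is the stated purpose of the confirmation message.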
Other aspects and advantages of the inventions will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate by way of example the principles of the inventions.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments. However, it will be apparent to one skilled in the art that the example embodiments may be practiced without some of these specific details. In other instances, process operations and implementation details have not been described in detail, if already well known.
Verification and Certification Using a Block Chain
In one embodiment, a ShoCard Identity Platform is a technology layer that interacts with a blockchain. The blockchain can then securely hold data used for certifying identity transactions. For example, blockchain technology forms the heart of the cryptocurrency Bitcoin. In addition, blockchain technology is used by several integrated systems provided by ShoCard, e.g., for systems other than currency transactions, in embodiments of the present invention.
In one use example, to register with ShoCard, a certification process is performed. In one embodiment, a user can scan, using a mobile device, a driver's license or passport, and a software application or device extracts the individual fields within it, such as name, license number, passport number, date of birth (or other data). The data may also be gathered individually or manually. The data is then processed to produce a hash of the data. In this example, the private key that is on the mobile device can be used to create a digital signature of that hash, and that digital signature is the piece of data that is stored to the blockchain. In one configuration, the various fields are put together in one record to create an ID for that user.
If the user then provides the raw data with a public key and a pointer to that record on the blockchain, the data can be verified. This provides a correlation between the data that the user has on the mobile device and what's on the blockchain.
In still other embodiments, following the registration process, users can be certified by some other trusted party such as a bank or KYC checking company, which then issues a certification for the user. By way of example, these certifiers can use their own private key to write the records on the blockchain, pointing to that user's record entry that's also on the blockchain. This may be referred to as a ShoCard ID, or generally, the User ID. In this example, there are two steps: one is the registration where hash signatures of the individual fields are placed on the blockchain; and the second one is a certification.
Understanding the basics discussed above, the system and methods perform operations referred to as “sealing” and “certifying.” Sealing is the process of hashing and encrypting the user's ShoCard data and storing it in the blockchain. Once it is sealed in the blockchain, the data becomes a permanent record. The user may change his or her ShoCard ID, but the user will have to re-seal it and create a new blockchain record. No readable information is stored in the blockchain, only an indecipherable hash that can only be unlocked by a corresponding private key, which is always controlled by the user.
“Certifying” the ShoCard ID is the process of another party acknowledging the accuracy of your ShoCard ID and marking it so they will recognize it as accurate again in the future without having to see any other evidence of identity beyond your ShoCard ID. To certify a ShoCard ID, you send your encrypted ShoCard ID to the certifier. The certifier will decrypt it and match the hash the user sent them to the hash stored in the blockchain. This proves that the user has the private keys that created both records. If the certifier is satisfied that the user is as identified, the certifier can create a new record with their own private keys that references the user's ShoCard ID. In the future, when the user presents his or her ShoCard ID, the certifier will check its certification to make sure the user is presenting the same ShoCard ID previously certified.
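The hash-matching step of certification can be sketched in simplified form. For illustration, the blockchain is mocked as an in-memory dictionary keyed by transaction number, the digital-signature step is omitted, and all field and function names are assumptions; this is a sketch of the re-hash-and-compare idea only, not the full ShoCard flow.

```python
# Toy sketch: sealing stores a hash; certifying re-hashes the presented
# fields and compares. The dict stands in for the blockchain, and the
# digital signature described in the text is omitted for brevity.
import hashlib

blockchain = {}  # mock: transaction number -> sealed hash

def seal(fields: dict, txn: str) -> None:
    """Hash the user's fields and store the hash under a transaction number."""
    record = "|".join(f"{k}={v}" for k, v in sorted(fields.items()))
    blockchain[txn] = hashlib.sha256(record.encode()).hexdigest()

def certify(fields: dict, txn: str) -> bool:
    """Re-hash the presented fields; a match proves they are the same
    data that was originally sealed."""
    record = "|".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return hashlib.sha256(record.encode()).hexdigest() == blockchain.get(txn)
```

Because only the hash is stored, no readable information appears in the (mock) blockchain, consistent with the sealing description above.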
In one embodiment, the digital-signature logic 221 then passes the signed hash value and the public key to a user accessible interface 226 (e.g., a graphical user interface or GUI), which might be other software running on the input device 212. In an example embodiment, the user accessible interface 226 might be part of an application or app that includes encryption logic, hashing logic 220, and digital-signature logic 221, and/or other modules or code. The user accessible interface 226 might be used by the user to transmit the digitally signed hash value and the public key to a public storage facility 228 via a line 230, and receive back from the public storage facility 228 a transaction number 232 corresponding to the transmitted hash value and public key. In an alternative example embodiment, only the signed hash value might be transmitted to public storage facility 228 by the user and persons retrieving the signed hash value might obtain the public key from elsewhere (e.g., the user, a public database, an Internet repository, a website, etc.). As is well known, there is no need to keep public keys secure, and in fact, the algorithms using public/private key pairs are designed to enable full sharing of public keys. The private key, on the other hand, must be kept secure, as noted above.
In one embodiment, the public storage facility 228 can take the form of a block chain (e.g., in a bitcoin online payment system) or any other public or private distributed database. The public storage facility 228 is connected to a communication link via a line and can be adapted to communicate over a public computer network, the internet, an intranet, an extranet, or any private communication network. Broadly speaking, the public storage facility 228 is accessible by any device that has an Internet connection over a network.
As indicated above, in an example embodiment, the input data might be hashed and the resulting hash value might be signed with a digital signature, created using a private key paired with a public key, before transmission, optionally along with the public key, from the input device (e.g., a user's smartphone) 212 to the public storage facility 228 for storage. The user accessible interface 226 is thus adapted to “seal” the signed hash value and the public key in the public storage facility 228. In one embodiment, once the hash value, and, optionally, the public key of the user is written to the block chain in a transaction, a later verification may be made if another party is able to hash the same input data.
The identification card 302 can be a government issued form of identification such as a driver license, passport, employee badge, military identification, political documentation, or the like. The identification card 302 can also be a privately issued form of identification such as a student ID, library card, social club card, or any other form of identification issued by a third party.
In one embodiment, as indicated by triangle 314, an input device 312 may be used to input such personal data from the identification card 302 to provide input data. Input device 312 can take many forms. For example, input device 312 can be a digital scanner, digital camera, or smartphone (e.g., with the camera commonly found in smartphones) for reading data from the identification card 302, including any codes appearing on the card 302. The input device 312 can also be a device for manually inputting personal data such as a keyboard, touchscreen, voice recognition device, handwriting recognition device, or other manual input device.
As shown in
The input data collected from the input device 312 (e.g., a user's smartphone) is passed to encryption logic 318 on input device 312. In an example embodiment, encryption logic 318 might include software, firmware, hardware, or any combination thereof, and consist of one or more encryption algorithms, e.g., an RSA encryption algorithm. Encryption logic 318 encrypts the input data with a public key to provide encrypted data. The public key is paired with an associated private key as is conventional when generating such keys using an RSA encryption algorithm, an Elliptic Curve Digital Signature Algorithm (ECDSA), or other encryption algorithm known to those skilled in the art. This encrypted data can then be stored locally on the input device 312 for added security. It can then only be accessed with the private key of the user on the input device 312, which might be stored in a more secure part of input device 312, e.g., “the Keychain”, if input device 312 is an iOS (e.g., operating system used by devices made by Apple, Inc.) smartphone. If the device is of a different type, e.g., one using an Android OS (e.g., operating system by Google, Inc.), similar secure device storage methods may be used. In this manner, for added security, the private key is not compromised and is kept safely on the input device 312. It should be understood that the private key may be stored on another device, but similar or additional security should be processed to ensure that the private key is not compromised.
As noted above, the operations to be performed by the hashing logic 320 can proceed directly after receiving the input data from the input device 312. In this embodiment, the hashing logic 320 is used for hashing the input data (or selected fields of the input data or personal data) to provide or generate a hash value. The hash value is sometimes referred to as “hash data,” that is generated by an algorithm. In an example embodiment, hashing logic 320 might be software, firmware, hardware, or any combination thereof, and consist of one or more hashing algorithms, e.g., a Secure Hash Algorithm (SHA) algorithm. Hashing logic 320 passes the hash value to digital-signature logic 321, which performs a digital signature on the hash value, using the private key on the input device 312. In an example embodiment, digital-signature logic 321 might be a component (or module) of encryption logic 318. In other embodiments, the digital-signature logic 321 may be defined by separate code, firmware, and/or hardware.
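The hashing step performed by hashing logic 320 can be illustrated minimally: selected fields of the input data are serialized and reduced to a fixed-size hash value using a SHA-family algorithm. The field names and serialization format below are illustrative assumptions.

```python
# Minimal illustration of the hashing step: selected fields of the input
# data are hashed into a fixed-size value with SHA-256, one member of the
# SHA family mentioned above. Field names are hypothetical.
import hashlib

fields = {"name": "A. Driver", "license_number": "D1234567"}
serialized = "|".join(f"{k}={v}" for k, v in sorted(fields.items()))
hash_value = hashlib.sha256(serialized.encode()).hexdigest()
```

In the embodiments, this hash value (not the raw fields) is then digitally signed with the private key on the input device before being sent to the public storage facility.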
In one embodiment, the digital-signature logic 321 then passes the signed hash value and the public key to a user accessible interface 326 (e.g., a graphical user interface or GUI), which might be other software running on the input device 312. In an example embodiment, the user accessible interface 326 might be part of an application or app that includes encryption logic 318, hashing logic 320, and digital-signature logic 321, and/or other modules or code. The user accessible interface 326 might be used by the user to transmit the digitally signed hash value and, optionally, the public key to a public storage facility 328 via a line 330, and receive back from the public storage facility 328 a transaction number 332 corresponding to the transmitted hash value and public key.
In one embodiment, the public storage facility 328 can take the form of a block chain (e.g., in a bitcoin online payment system) or any other public or private distributed database. The public storage facility 328 is connected to a communication link via a line and can be adapted to communicate over a public computer network, the internet, an intranet, an extranet, or any private communication network. Broadly speaking, the public storage facility 328 is accessible by any device that has an Internet connection over a network.
As indicated above, in an example embodiment, the input data (or selected fields of the input data) might be hashed and the resulting hash value might be signed with a digital signature, created using a private key paired with a public key, before transmission, along with, optionally, the public key, from the input device (e.g., a user's smartphone) 312 to the public storage facility 328 for storage. The user accessible interface 326 is thus adapted to “seal” the signed hash value and the public key in the public storage facility 328. In one embodiment, once the hash value, and, optionally, the public key of the user is written to the block chain in a transaction, a later verification may be made if another party is able to hash the same input data.
The user accessible interface 326 (e.g., a GUI) can be controllable by the user of the input device 312 to encrypt and provide the transaction number 332, the input data (or selected fields of the input data), and, optionally, the public key to an input device 342 (e.g., a smartphone) of a certifier. In an example embodiment, the encryption might be performed by the encryption logic 318 using a public key of a certifier paired with a private key of the certifier. Then, coding logic on the input device 312 might code the encrypted transaction number 332, the input data (or selected fields of the input data), and, optionally, the public key into a barcode or QR code and the certifier might use input device 342 to scan the barcode or QR code and decode it to gain access to the encrypted items. Thereafter, the certifier might decrypt the encrypted items using the private key of the certifier and verify them, e.g., using a “verify” function call to an RSA algorithm as explained in further detail below.
Once the certifier's input device 342 receives the barcode or QR code, decoding logic on the certifier's input device 342 might decode the barcode or QR code and decryption logic 370 on the certifier's input device 342 might use the certifier's private key to decrypt the encrypted items. In an example embodiment, decryption logic 370 might be a component (or module) of more general encryption logic. In one embodiment, the decrypted input data (or selected fields of the input data) might be hashed into a hash value by hashing logic 372 on the certifier's input device 342, using the same hashing algorithm that was used to create the hash value that was digitally signed by the user. And the decrypted transaction number 332 might be used by a user accessible interface 380 (e.g., a GUI) to access the public storage facility 328 (e.g., the block chain) and retrieve the signed hash value and public key of the user. The retrieved signed hash value, the generated hash value, and the retrieved or obtained public key might then be input to verifying logic 373 for verification (e.g., through a “verify” function call to an RSA algorithm), which outputs a “true” value if the two hash values are the same and the public key is associated with the signature or a “false” value if the two hash values are not the same or the public key is not associated with the signature. In an example embodiment, verifying logic 373 might be a component (or module) of decryption logic 370. In another embodiment, the verifying logic 373 may be a separate module, software, firmware and/or hardware. As indicated above, in an example embodiment, the public key of the user might be obtained from some other source than the public storage facility 328 (e.g., from the user).
Transferring Large Data Sets Using Visual Codes
It is desirable for two devices to communicate with one another even if there is no connectivity between the devices. The assumption is that the devices have no internet connectivity, Wi-Fi connectivity, compatible Bluetooth, NFC, or other communication capability. This might be the case if the devices are from different manufacturers (e.g., Apple and Samsung running iOS and Android, respectively) or if they are in a location where there is no connectivity option available. For example, an individual may be stopped by a police officer who wishes to view the individual's data, but they are in a faraway mountainous area without connectivity. Another example is when an individual travels to another country and doesn't have connectivity. This is also common when a passenger wants to share a digital identity when traveling to another country where he/she does not have internet coverage.
Small sets of data can be passed by creating a QR code (or equivalent) on one device and having a second device scan that QR code (or equivalent). However, such codes are typically limited in size and usually carry up to a maximum of 2 k or 4 k bytes of data. If a user wishes to share larger sets of data, such as an image of a driver's license, an image of a passport, metadata about the user, or a selfie image of the user, a QR code or equivalent typically cannot accommodate the need.
To accommodate this need, the user intending to send data might use software (e.g., a mobile-device app) to break up the large data set into smaller chunks that can each fit into a smaller QR code (or equivalent). Each block needs to be appended with sufficient control information so that it is uniquely identified as the given block in the sequence of blocks. For example, a 20 k block of data can be broken up into ten 2 k blocks, and each block can carry a reference number indicating its position in the sequence and a count of the total blocks. Some of the information might be maintained in the first block or repeated in each block. Hence, a block of data in its simplest form might have the form:
<block-number>,<total-blocks>.<data>
Any form of syntax might be used to separate out the fields. In this example, commas and periods are used, but any equivalent format can be utilized. The sender might use a mobile-device app to rotate between each QR code and display each one briefly. When the last QR code is displayed, the mobile-device app might rotate again and begin from the first QR code, continuously looping so long as the data is being shared and until the user closes the code-display screen. The mobile-device app cycling through the QR codes might determine how quickly they rotate. A number of variables can influence the speed at which the mobile-device app rotates the QR codes. For example, if the quality of the scanner is good in terms of accuracy, then the speed of rotation might be greater and/or the block sizes might be denser (which results in fewer blocks being needed to pass the entire data).
On the receiving side, software on the second device must then scan the screen of the first device with the QR codes until all blocks are read. It is most likely that the blocks will not be scanned in sequence as the scanner may skip some blocks and will need to pick them up on a subsequent rotation. The software on the second device must keep track of the total blocks and account for each unique block that has been read. Once all blocks are accounted for, it can assemble the data of the blocks together according to their sequence numbers and form one single data block that represents the original data set that the sender had sent over.
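The receiving-side loop described above can be sketched as follows. The `scan_next` callable stands in for the camera capture of one displayed code and is a hypothetical name; the header format follows the `<block-number>,<total-blocks>.<data>` form given above.

```python
# Hedged sketch of the receiver-side scanning loop. scan_next() is a
# stand-in for capturing one visual code from the sender's screen and
# returning its decoded bytes.
def receive_all(scan_next) -> bytes:
    """Keep scanning codes in whatever order they arrive, track which
    unique blocks have been read, and reassemble once all are present."""
    seen = {}
    total = None
    while total is None or len(seen) < total:
        raw = scan_next()                      # one captured code, as bytes
        header, _, payload = raw.partition(b".")
        num_s, _, total_s = header.partition(b",")
        num, total = int(num_s), int(total_s)
        if num not in seen:                    # skip blocks already captured
            seen[num] = payload
    return b"".join(seen[i] for i in range(total))
```

As the text notes, blocks are unlikely to be scanned in sequence; the dictionary of seen block numbers lets the receiver pick up skipped blocks on a later rotation and stop as soon as every unique block is accounted for.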
The assumption in a secure-envelope exchange is that the two users already know one another and are aware of each other's SealId, e.g., an identifier that identifies a user for purposes of identification software, e.g., which might be distributed as a mobile-device app. Additionally, each of the two users will have access to the public key associated with the private key which the other user used to seal his/her identification. User A intends to send a block of data to User B. User A places that data in a data block and may add additional identification fields to that block of data such as a timestamp, User A's own SealId, User B's SealId, User A's public key and, if there is a session between the two, a sessionId. The value of the timestamp and the sessionId is to ensure the freshness of the data block versus one that may have been created and somehow reused again. This data block will be referred to as <dataBlock>. Next, User A uses his/her own private key to digitally sign the <dataBlock> that was created. The result is <envelopeSignature>. Next, an <envelopeContent> is created by putting together the <dataBlock> and the <envelopeSignature>. Then, a <secureEnvelope> is created by encrypting the <envelopeContent> with User B's public key. This secure envelope can now be transmitted to User B directly. User B can view the <envelopeContent> by decrypting the <secureEnvelope> using his/her private key that no one else has access to. User B might then verify the <dataBlock> in the envelope content by verifying the <dataBlock> and the <envelopeSignature> with the User A's public key that was passed. User B might also verify that this is the same public key that was used in User A's SealId. There is no restriction as to how User A passes the secure envelope to User B. It can be done via email, a text message, NFC, or any other form of digital communication where the data can be passed, including through the visual codes described in the present application.
Because it is encrypted using User B's public key, only User B can view the message and no other user can modify it either. In fact, after the secure envelope is created, even User A can no longer view its content.
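The structure of the secure-envelope exchange can be sketched in runnable form. This is purely a structural illustration under loud assumptions: a real deployment uses asymmetric signing and public-key encryption, whereas here an HMAC over a shared key stands in for User A's digital signature and base64 encoding stands in for encryption under User B's public key, so only the shape of `<dataBlock>` → `<envelopeSignature>` → `<envelopeContent>` → `<secureEnvelope>` is demonstrated. All function and field names are hypothetical.

```python
# Structural sketch only. HMAC stands in for the asymmetric signature
# and base64 stands in for public-key encryption, so the envelope
# layering is runnable without a crypto library.
import base64
import hashlib
import hmac
import json

def make_secure_envelope(data_block: dict, sender_key: bytes) -> bytes:
    content = json.dumps(data_block, sort_keys=True)
    signature = hmac.new(sender_key, content.encode(), hashlib.sha256).hexdigest()
    envelope_content = json.dumps({"dataBlock": data_block,
                                   "envelopeSignature": signature},
                                  sort_keys=True)
    return base64.b64encode(envelope_content.encode())  # stand-in "encryption"

def open_secure_envelope(envelope: bytes, sender_key: bytes) -> dict:
    envelope_content = json.loads(base64.b64decode(envelope))
    content = json.dumps(envelope_content["dataBlock"], sort_keys=True)
    expected = hmac.new(sender_key, content.encode(), hashlib.sha256).hexdigest()
    # Reject the envelope if the signature does not verify.
    assert hmac.compare_digest(expected, envelope_content["envelopeSignature"])
    return envelope_content["dataBlock"]
```

With real public-key encryption in place of the base64 stand-in, only User B's private key could open the envelope, and even User A could no longer view its content after creation, as the text notes.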
In an example embodiment, device 4102 is a mobile device such as a smartphone on which an original data file (e.g., an image file) is stored, e.g., in volatile memory or persistent storage, and device 4103 is an image-scanning device such as a smartphone, a kiosk, a handheld scanner, etc. Also, in an example embodiment, device 4102 includes functionality (e.g., hardware, firmware, and/or software) for converting a data file into a visual code such as a 1-D barcode, a 2-D (or matrix) barcode, a 3-D barcode, etc. And device 4103 includes functionality (e.g., hardware, firmware, and/or software) for capturing (or scanning) a visual code such as a 1-D barcode, a 2-D (or matrix) barcode, a 3-D barcode, etc. Examples of 2-D (or matrix) barcodes which might be used with the example embodiments described herein include: Aztec Code, Code 1, ColorCode, Color Construct Code, CrontoSign, CyberCode, d-touch, DataGlyphs, Data Matrix, Datastrip Code, digital paper, EZcode, High Capacity Color Barcode, Han Xin Barcode, HueCode, InterCode, MaxiCode, MMCC, NexCode, Nintendo e-Reader Dot code, PDF417, Qode, QR code, ShotCode, SPARQCode, and VOICEYE.
More generally, devices 4102 and 4103 might include (1) hardware consisting of one or more microprocessors and/or microcontrollers (e.g., from the ARM family or the x86 family), volatile storage (e.g., RAM), and persistent storage (e.g., flash memory such as microSD) and (2) an operating system (e.g., Android, iOS, Chrome OS, Windows Phone, Linux, Firefox OS, etc.) that runs on the hardware. Additionally, devices 4102 and 4103 might include a screen capable of displaying a visual code (e.g., a 2D or matrix barcode) and a camera or barcode scanner capable of capturing a visual code (e.g., a 2D or matrix barcode).
In an example use case, a law enforcement officer in a patrol car might pull over a driver in a remote location. The driver might be the user of device 4102, a smartphone, and the original data file might be an image file depicting the driver which was taken by a DMV (Department of Motor Vehicles) official, e.g., for a driver license. The law enforcement officer might be the user of device 4103, also a smartphone, which might have network connectivity to physical storage in the patrol car storing a database of DMV images for drivers licensed in a jurisdiction. Or the patrol car might have network connectivity to a public or private block-chain database storing DMV images for drivers licensed in a jurisdiction. The law enforcement officer may not have any connectivity to the patrol vehicle or the internet, but may still wish to extract the driver license information from the driver via device 4102 for partial validation.
In another example use case, the user of device 4102, a smartphone, might be a traveler and the original data file might be an image file depicting the traveler which was taken for a state department, e.g., for a passport. Device 4103 might be an airline kiosk, which might have network connectivity to physical storage at the airline or at the state department storing a database of passport images for a jurisdiction. In an example embodiment, the database might be a public or private block-chain database storing passport images for a jurisdiction.
As depicted in
As also depicted in
In another example embodiment, the software on the mobile (or sending device) might generate the visual code directly from a display block with a header, e.g., if the standard for the visual code allows for such a conversion. In that example embodiment, the software on the image-scanning device might convert the visual code directly back into a display block before checking for duplication or removing the header and recording the block number.
In an example embodiment, one or more of the above operations might be performed by a separate thread in a process, in order for the operations to occur in parallel. Also, in an example embodiment, the display block might not be converted into a string prior to generation of a visual code for the display block. For example, the display block might be converted into a byte array or some other byte grouping. In that event, the visual code would not need to be converted back into a string before conversion back into a display block. In other examples, the original data may be converted into another format before transfer such as base64, base128, Hex, Octal, or other formats that allow binary data to be represented in ASCII format.
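For illustration, a display block's raw bytes might be represented in the binary-to-text formats mentioned above (the 8-byte block shown is arbitrary):

```python
import base64
import binascii

raw_block = bytes(range(8))  # arbitrary binary display-block bytes

# Binary-to-text representations mentioned above; a visual-code standard's
# byte mode can carry raw bytes directly, but text modes need one of these.
as_base64 = base64.b64encode(raw_block).decode("ascii")
as_hex = binascii.hexlify(raw_block).decode("ascii")

print(as_base64)  # 'AAECAwQFBgc='
print(as_hex)     # '0001020304050607'
```

Base64 expands the data by roughly one third, while hexadecimal doubles it, which matters when each visual code has a fixed capacity.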
Moreover, in an example embodiment the original data file might not be an image file. Rather the original data file might be non-image biometric data concerning the user of the mobile device, e.g., fingerprint, voice data, and/or genetic-identification data.
Other use cases for transferring an original data file are described in
As depicted in
As depicted in
Feedback Confirmation
When transferring large data sets, it is useful to break up the data into several smaller blocks. For example, if the blocks of data are transferred using QR codes, each block can be limited to about 2 KB, as previously described. In one embodiment, to make scanning the data easy, a smaller size of about 1 KB may be used. For example, to transfer a 40 KB data set, the process might then require as many as 40 data blocks to be presented for a scanner to read. Without implementing embodiments of the present invention, the sending device has no way of knowing which blocks have been read. The sending device will therefore continue to display the data blocks in sequence and repeat the process until the process is terminated. This can significantly slow down the process. For example, if 39 of the 40 blocks have been read by the receiving device, the sending device will continue to rotate through all 40 blocks even though the receiving device is only interested in a single block. If the receiving device somehow continually misses the block of interest, the process will continue until all blocks have been read.
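For illustration, the partitioning arithmetic above might be sketched as follows (block size and data contents are illustrative):

```python
# Partitioning a 40 KB data set into 1 KB display blocks, as in the
# example above.
data = bytes(40 * 1024)          # a 40 KB original data file
BLOCK_SIZE = 1024                # ~1 KB per block for easy scanning

blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
print(len(blocks))  # 40 blocks, each displayed in the repeating progression
```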
Embodiments of the present invention provide for the receiving device to deliver feedback (e.g., a confirmation message) as to which display blocks have been read, thereby speeding up the transfer process. Because the sending device is configured to receive that feedback information, the sending device could avoid displaying visual codes corresponding to display blocks already scanned (e.g., read) and the process can be significantly expedited.
The receiving device can provide feedback to the sending device. In one embodiment, the feedback includes information indicating all display blocks that have been properly received so far. In another embodiment, the feedback includes information indicating display blocks that the receiving device has not yet received.
Any number of methods can be used to communicate this feedback to the sender, in embodiments. In one embodiment, a visual code (such as a QR code) is displayed on the screen of the receiving device, wherein the visual code includes the confirmation message. The sending device is configured to scan this visual code at the same time as the sending device displays its own data codes (e.g., visual codes) to be scanned by the receiving device. In particular, the receiving device can embed a series of numbers in sequence that represent either the received or the not-received blocks.
While the sending device 910 loops through a display of a series of visual codes (e.g., QR codes), the sending device also looks to read a visual code from the receiving device that includes the confirmation message. For example, the receiving device 920 includes a front camera 925 and a screen 923. The receiving device refreshes screen 923 to display a current visual code that represents the feedback (e.g., confirmation message) for the sending device. In one embodiment, the feedback is displayed on the receiving device while simultaneously reading data from the sending device 910. As shown, the feedback is displayed in the form of a QR code 927, in one embodiment. Other embodiments are well suited to support other formats, such as a two-dimensional (2D) bar code, etc.
Other forms of communicating the feedback are also possible. In other embodiments, the feedback is communicated using a wireless communication link. For example, the feedback may be communicated over a Bluetooth link, in one embodiment. In another embodiment, the feedback may be communicated over a Bluetooth Low Energy (BLE) link. In still another embodiment, the feedback may be communicated over a near field communication (NFC) link. These links are provided for illustration, and other forms of communicating the feedback are also possible.
While the sending device 910 loops through a display of a series of visual codes (e.g., QR-Codes), the sending device also looks to receive feedback from the receiving device that includes the confirmation message, wherein the feedback may be communicated over a wireless communication link 930. In one embodiment, the wireless communication link is configured for local one-way communication, such as through broadcasting, or advertising the confirmation message. This reduces the complexity of the transaction, by not requiring any handshaking protocol between the two devices. As shown in
At 1001, the method includes iteratively capturing, by the receiving device, visual codes. The sending device displays a series of visual codes in a repeating progression on a screen. The sending device may display a subset of the series of visual codes, such as a subset formed from a smaller group of the series. The series and/or subset of visual codes may be modified to remove visual codes of display blocks that have already been received by the receiving device. In some cases, the subset is defined as the modified version of the series (or of a prior subset) of visual codes.
At 1002, the method includes converting each of the captured visual codes into a corresponding string and reading a header for the corresponding string. As previously described, a corresponding captured visual code is generated from a display block that resulted from a partition of an original data file into a series of display blocks. In one embodiment, the display blocks are of at least a specified size. Each display block is converted to a corresponding string with a corresponding header, wherein the header includes an ordered identifying display block number and a total count of the display blocks in the series. Each corresponding string is converted to a corresponding visual code.
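For illustration, the header handling described above might be sketched with a hypothetical fixed layout; the assumption here, not stated in the description, is that the block number and total count are each carried as a 16-bit big-endian integer prepended to the payload:

```python
import struct

# Hypothetical fixed header layout: block number and total count as
# big-endian 16-bit integers, prepended to the block payload.
HEADER = struct.Struct(">HH")

def add_header(block_number: int, total_count: int, payload: bytes) -> bytes:
    return HEADER.pack(block_number, total_count) + payload

def read_header(data: bytes):
    block_number, total_count = HEADER.unpack_from(data)
    return block_number, total_count, data[HEADER.size:]

framed = add_header(3, 40, b"payload-bytes")
number, count, payload = read_header(framed)
print(number, count)  # 3 40
```

The receiving device parses this header from each converted string to decide whether the block is a duplicate and whether all blocks have been seen.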
At 1003, the method includes determining which display blocks have been captured in the series of display blocks. From each header that is read corresponding to captured visual codes, the receiving device knows the total count of display blocks and the corresponding display block number. As such, each of the display block numbers of captured visual codes, and their corresponding display blocks, are also known. In addition, if the series of visual codes is broken into smaller groups, then the corresponding group numbers for the captured visual codes are also known. Group numbers are known based on the algorithm used for compartmentalizing the visual codes into groups.
At 1004, the method includes generating a confirmation message that includes information indicating which display blocks have been received. In one embodiment, the information explicitly recites which display blocks have been received. In another embodiment, the information explicitly recites which display blocks have not been received; as such, by inference and knowing how many display blocks are in the series, the display blocks that have been received are also known. A more detailed description of the confirmation message is provided below.
At 1005, the method includes sending the confirmation message over a wireless communication link to the sending device in order to reduce the number of visual codes being displayed by the sender, in one embodiment. For example, the wireless communication link may be one of a Bluetooth, BLE, NFC, a one-way communication link, or any local communication link.
In other embodiments, the confirmation message is delivered over a communication medium other than a wireless communication link. For example, the confirmation message may be displayed by the receiving device for capture by the sending device (e.g., using a camera or image capture device). By communicating which display blocks have been received and their corresponding visual codes, the number of visual codes being displayed by the sending device may be reduced.
Using a Bluetooth or BLE format, the sending device 910 can start the display process by showing a visual code that represents the Bluetooth beacon ID to use for communicating messages. In particular, the sending device will listen to messages including the beacon ID. For example, at 1010 the sending device advertises the beacon ID 1011. As such, the receiving device 920 is able to pick up the beacon ID 1011.
In one embodiment, the sending device 910 can initiate the process for the data exchange by displaying all of the control information in an initial visual code (e.g., a QR code). This initial request can advertise the Beacon Identifier (ID) that the sending device will be listening for. In one embodiment, the control information includes the number of bits to be used for the group numbers. Other control information, such as the total number of data blocks, may also be included in this initial visual code or otherwise transmitted, as previously described. Once the receiving device scans this initial block, the receiving device can then proceed to provide feedback to the sending device that the receiving device is ready. The receiving device can do this by sending feedback beacons using the Beacon ID which the sending device advertised, and providing an integer value of all zeros, which means that no display block has yet been received, as will be described below.
Simultaneously, the sending device 910 visually displays a series of visual codes 1060, or a subset of the series, wherein the series or subset may have been modified to remove visual codes that have been received by the receiving device 920 (as indicated though confirmation messages). The subset may be defined to include the modifications.
At 1030, the receiving device 920 captures one or more of the visual codes that have been displayed on the screen of the sending device. By converting the visual codes to corresponding strings and corresponding headers, the corresponding display blocks of the original data can be determined. Each header for a corresponding string includes a total count of display blocks and the corresponding display block number. As such, the receiving device 920 is configured to determine which display blocks (and corresponding visual codes) have been received.
At 1040, the receiving device generates a confirmation message indicating which display blocks (and corresponding visual codes) have been received. The confirmation message 1041 can be delivered to the sending device. In one embodiment, the confirmation message is delivered over a wireless communication link, as previously described. In
At 1050, the sending device 910 may modify the series of visual codes or the subset of the series that are displayed. In particular, the confirmation message indicates which visual codes have been read or received by the receiving device 920. As such, there is no need to continually send those received visual codes, and the sending device 910 may remove those visual codes from the series or subset of the series. The subset of a series may reflect this modification, with received visual codes removed from the series or from another subset. That is, the method includes iteratively capturing, by the receiving device, visual codes in a subset of the series of visual codes displayed in the repeating progression on the screen of the sending device, wherein visual codes corresponding to display blocks that have been received by the receiver are removed when the sending device generates the subset of the series of visual codes. In that manner, the modified series or subset of the series of visual codes may then be displayed by the sending device 910 for capture by the receiving device.
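For illustration, the sender-side pruning described above might be sketched as follows (the mapping of block numbers to visual codes is illustrative):

```python
# Once the confirmation message reports received block numbers, the
# sending device drops their visual codes from the display rotation.
series = {n: f"visual-code-{n}" for n in range(6)}   # block number -> code
received = {0, 2, 5}                                  # from confirmation message

remaining = {n: code for n, code in series.items() if n not in received}
print(sorted(remaining))  # [1, 3, 4] still rotate on the screen
```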
Once all the visual codes in the series have been captured, as indicated within a confirmation message, the sending device may terminate the display of visual codes at 1055. No further action is required of the user of the sending device. That is, the termination of the data transfer process is automatically performed between the two devices.
As shown, the sending device 910 displays a series of visual codes 1060. For example, the visual codes may be QR codes, or 2D bar codes, etc. In
The receiving device 920 receives one or more of the displayed visual codes. In addition, the receiving device 920 is configured to provide feedback to the sending device 910 as to which display blocks (per series or group within the series) have been received. In that manner, the sending device need not send visual codes corresponding to display blocks that have already been received. Because the visual codes are displayed by group, confirmation messages are generated in association with a corresponding group. For example, a confirmation message 1200′ may be delivered to the sending device for group 0, and confirmation message 1200″ may be delivered to the sending device for group 1, . . . and confirmation message 1200′″ may be delivered to the sending device for group n. A more detailed description of the confirmation message 1200 is provided in relation to
At 1105, the method includes assigning group numbers to each group of a total number of groups needed to transfer the total count of display blocks. The number of groups is dependent on the size of the confirmation message. For example, the confirmation message includes the beacon ID, a group number (if necessary), and information indicating whether a corresponding display block has been received. The number of groups, as well as the assignment of display blocks (and their respective visual codes) to groups, may be determined by an algorithm. If both the sending device and the receiving device use the same algorithm, then, based on the number of visual codes (corresponding to display blocks) that are sent, the number of groups and the display blocks within each group can be determined, in one embodiment. Also, both the sending device and the receiving device can calculate, using the same algorithm, the number of bytes or bits required for the group numbers, as reserved in the confirmation message. In another embodiment, the group numbers and the display blocks within the group numbers are provided by the sending device through a control message, as previously described. In particular, the sending device can advertise the number of bits that the sending device will be looking for in feedback messages. The sending device can format this information in the same visual code that it uses to communicate the Bluetooth beacon that it will be listening for.
At 1110, the method includes capturing a visual code. The visual code can be converted to a corresponding header and corresponding string. Through a two-way conversion process, a corresponding display block can be converted to the corresponding string and corresponding header, and further converted to a corresponding visual code.
At 1115, the group number for the captured visual code can be determined, and a display block for the captured visual code is also determined at 1120. This is determined from the respective header information obtained within the converted visual code. In particular, for each captured visual code, the header of the corresponding string is parsed. Based on the header, the ordered identifying display block number is determined and recorded. In addition, the corresponding string is converted to a corresponding display block. The header provides the total count of display blocks and the corresponding display block number. From the header, the group number can be determined (e.g., through algorithm, or control information).
At decision step 1125, the method determines whether all the display blocks in the group have been captured/received by the receiving device. Generally, the sending device will continue to display data blocks in the given group being displayed until all frames are set by the receiving device in the feedback/confirmation message. Then, the sending device will go to the next group and repeat the process for each group until all data blocks have been sent.
In particular, if there are more display blocks in the group to be received, then at 1135, a confirmation message is updated and/or generated to reflect that result. In particular, the confirmation message may be updated to reflect the current receipt status for the display block corresponding to the currently captured visual code. That is, the confirmation message includes information indicating which display blocks have been received and/or which display blocks have not been received. The confirmation message is periodically delivered from the receiving device. When transferring data between two devices, the process for providing feedback may be delivered through a wireless communication link (e.g., BLE, Bluetooth, NFC), or may be delivered over any other type of local communication medium (e.g., through displaying visual codes).
On the other hand, if the method determines that all display blocks in the current group have been received, then at decision step 1130, the method determines whether there are any remaining groups for which visual codes have not been displayed by the sending device. If there are remaining groups, then the process proceeds to 1135.
If there are no remaining groups, then all of the display blocks have been received, and an end confirmation message is generated and delivered at 1135′. The generation of the end confirmation message in 1135′ follows the same process as in 1135, except that the current receipt status for the display blocks in the last group show that all display blocks have been received. In one embodiment, the sending device tracks which groups have been handled, and is able to determine which confirmation is the last confirmation message (e.g., based on all visual codes in all groups being displayed and received).
In addition, the receiving device is configured to stop the iterative capturing when all of the display blocks in the series have been captured. The termination may be determined from information used to generate the confirmation message(s). Further, the receiving device is configured for coalescing the captured display blocks into the original data file, using an order determined by the ordered identifying block numbers. For example, the original data file may contain information about a user associated with the sending device, or information related to a document held by the user. The original data file may be an image file. For example, the original data file may be a government-issued driver's license that is used for identifying the user. In another implementation, the original data file may include biometric data of the user. In addition, as previously described, the receiving device is configured to verify the integrity of the original data file using a seal that is accessed from a block-chain database storing the seal. The seal is generated based on the contents of the original data file. Also, the receiving device is configured to verify the validity of the original data file using a certification accessed from the block-chain database storing the certification. The certification is generated based on the contents of the original data file and the seal. In an example embodiment, each of the above operations is performed by an image-capturing device.
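For illustration, the coalescing step might be sketched as follows, with captured blocks arriving out of order and reassembled by block number (the block contents are illustrative):

```python
# Coalescing captured display blocks back into the original data file,
# ordered by their identifying block numbers.
captured = {2: b"-file", 0: b"original", 1: b"-data"}  # arrives out of order
total_count = 3

assert len(captured) == total_count        # stop condition: all blocks seen
original = b"".join(captured[n] for n in sorted(captured))
print(original)  # b'original-data-file'
```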
As previously described, at 1115 the receiving device determines a group number from the headers of one or more captured visual codes within a group. The header provides the total count of display blocks and the corresponding display block number. From the header, the group number can be determined (e.g., through algorithm, or control information).
At 1142, the method includes optionally setting bits in a first portion of the confirmation message to the group number. That is, if there is more than one group, then the confirmation message includes a first portion that indicates the relevant group number, to which the following status information applies.
At 1144, the method includes determining the bit number in the second portion of the confirmation message that corresponds to the captured visual code. Each bit in the second portion corresponds to a different display block (and respective visual code) in the group number.
At 1146, the method includes updating the confirmation message by setting the bit number in the second portion that corresponds to the captured visual code to a first value indicating the display block has been received by the receiving device. In particular, the information in the confirmation message, indicating which display blocks have been received, is configured to include ordered bits corresponding to the series of display blocks, wherein each bit in the second portion corresponds to a different display block. A first ordered bit corresponds to a first display block in the series of display blocks. A first bit value for the first ordered bit indicates that the first display block has been received, and a second bit value for the first ordered bit indicates that the first display block has not been received.
In another embodiment, when groups of the series of visual codes are implemented, the method includes determining a total count of display blocks from the corresponding header of the corresponding captured visual code. In addition, the method includes determining a number of groups needed to transfer the total count of display blocks based on a size of the confirmation message. The total number of groups as previously described may be determined from control information provided by the sending device, or determined through a common algorithm based on the total count of display blocks (and visual codes) and the size of the confirmation message. The method includes reserving a first portion of the confirmation message for identifying a group, and reserving a second portion of the confirmation message to correspond to display blocks in an identified group. The method includes iteratively capturing by the receiving device visual codes in the identified group of the series of visual codes displayed in the repeating progression on the screen of the sending device. In addition, for the identified group, the method includes setting a first bit in the second portion to a first value when a first display block corresponding to the first bit has been received, wherein the identified group includes the first display block. Also, for the identified group, the method includes setting the first bit to a second value when the first display block has not been received.
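For illustration, a 32-bit confirmation message with a 3-bit first portion (group number) and a 29-bit second portion (per-block receipt flags) might be packed and parsed as follows; the exact bit positions chosen here are an assumption for illustration:

```python
# Sketch of a 32-bit confirmation message: a 3-bit first portion holds
# the group number, and the remaining 29 bits are per-block receipt flags.
GROUP_BITS = 3
STATUS_BITS = 32 - GROUP_BITS  # 29 receipt-status bits per group

def make_confirmation(group, received_frames):
    word = group << STATUS_BITS            # first portion: group number
    for frame in received_frames:          # second portion: one bit per block
        word |= 1 << frame
    return word

def parse_confirmation(word):
    group = word >> STATUS_BITS
    frames = {f for f in range(STATUS_BITS) if word & (1 << f)}
    return group, frames

msg = make_confirmation(group=1, received_frames={0, 4, 28})
print(parse_confirmation(msg))  # (1, {0, 4, 28})
```

A first bit value (1) marks a received display block and a second bit value (0) marks one not yet received, matching the two bit values described above.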
At 1150, the method includes displaying a series of visual codes or a subset of the series of visual codes in progressive fashion. That is, the visual codes are displayed in a repeating progression on the screen of the sending device.
At 1155, the method includes receiving a confirmation message from the receiving device. The confirmation message may be delivered through a wireless communication link (e.g., Bluetooth, BLE, NFC, one-way link, etc.), or through a local communication medium (e.g., displaying visual codes). The confirmation message includes a first portion indicating which group number is relevant, and a second portion having receipt status for corresponding display blocks. As such, at 1160, the method includes determining a relevant group number from the confirmation message.
In addition, at 1165, the method includes determining which display blocks correspond to the group number. As previously described, the number of groups, as well as the display blocks (and their respective visual codes) belonging to each group, may be determined by an algorithm. If both the sending device and the receiving device use the same algorithm, then based on the number of visual codes (corresponding to display blocks) that are sent, the number of groups, and the display blocks within each group can be determined, in one embodiment.
For example, the algorithm used to determine the number of groups and for assigning display blocks to groups may include determining the number of groups based on the size of the confirmation message. Once it is determined how many bits may be reserved for indicating receipt status for display blocks, the display blocks in the series may be allocated to each group. For illustration, groups are fully filled with corresponding display-block status information until reaching the last required group. The last required group handles the remainder of the display blocks and may not be fully filled. In addition, not all groups may be used, depending on the total count of display blocks. Alternatively, some other algorithm may be used for allocating display blocks to groups. For example, there may be an attempt to distribute the display blocks equally among the groups required to handle the total count of display blocks.
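For illustration, the fill-until-the-last-group allocation described above might be sketched as follows (29 status bits per group are assumed here):

```python
# Fill each group with as many block-status slots as the confirmation
# message allows, leaving the remainder in the last group.
def allocate_groups(total_blocks, frames_per_group):
    groups = []
    for start in range(0, total_blocks, frames_per_group):
        end = min(start + frames_per_group, total_blocks)
        groups.append(list(range(start, end)))
    return groups

groups = allocate_groups(total_blocks=70, frames_per_group=29)
print([len(g) for g in groups])  # [29, 29, 12] -- last group not fully filled
```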
Further, at 1170, the method includes determining which display blocks and corresponding visual codes in the relevant group have been received. This is provided from the second portion of the confirmation message. In particular, the sending device may remove those visual codes that correspond to display blocks that have been received. As such, at 1175, the method includes modifying the series of visual codes or the subset of visual codes by removing the visual codes corresponding to display blocks that have been received. The subset of a series may reflect this modification, with received visual codes removed from the series or from another subset, as previously described. In that manner, the modified series or subset of the series of visual codes may then be displayed by the sending device 910 for capture by the receiving device.
At decision step 1180, the method determines whether all display blocks in the series of display blocks have been received by the receiving device. If not all display blocks in the group have been received, and/or if not all display blocks in all the groups have been received, then the process returns to 1150 to display the series of visual codes or subset of the series, now modified. On the other hand, if the method determines that all display blocks in the series of display blocks have been received, then the process terminates. Termination is based on the confirmation message or messages. In particular, the sending device will know when the last data block was received by the receiver based on the feedback/confirmation message. Once the sending device has received that message, the sending device can abort the process and either display a status or move to other functions.
As an example, if 3 bits are allocated to the group number, then 2^3, or 8, different groups can be represented. That would then leave 29 bits for the frames in each group. Hence, group 0, frame 0, is the very first block of data; group 0, frame 1, is the second block of data; and group 1, frame 0 is then the 30th block of data (29 blocks in group zero plus the first in group 1). This setup would allow for 8 groups of 29 frames each, or 232 total blocks of data.
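The group/frame arithmetic worked through above might be sketched as follows:

```python
# Mapping (group, frame) to the global block index for the
# 3-bit-group / 29-frame layout in the example above.
FRAMES_PER_GROUP = 29

def to_block_index(group, frame):
    return group * FRAMES_PER_GROUP + frame

print(to_block_index(0, 0))   # 0   -> the very first block of data
print(to_block_index(1, 0))   # 29  -> the 30th block of data
print(8 * FRAMES_PER_GROUP)   # 232 total addressable blocks
```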
Since the sending device knows how many blocks of data are necessary to complete the transmission of the data set, the sending device can optimize by dynamically determining the number of bits needed for the group number. For example, if there are only 20 display blocks of data, the sending device can allocate zero bits for the group number (for a total count of display blocks less than or equal to 32), as shown in
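The dynamic allocation described above might be sketched as choosing the fewest group bits g such that 2^g groups of (32 - g) status bits cover the total block count (the 32-bit message size is taken from the example above):

```python
# Dynamically choosing the number of group bits for a 32-bit
# confirmation message, as described above.
def group_bits_needed(total_blocks, message_bits=32):
    for g in range(message_bits):
        if (1 << g) * (message_bits - g) >= total_blocks:
            return g
    raise ValueError("data set too large for this message size")

print(group_bits_needed(20))   # 0 -- 32 or fewer blocks need no group number
print(group_bits_needed(232))  # 3 -- the 8-group, 29-frame layout above
```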
As depicted in
As also depicted in
While specific embodiments have been provided for transferring large data sets and providing feedback indicating which portions of the data sets have been received, these are described by way of example and not by way of limitation. Those skilled in the art having read the present disclosure will realize additional embodiments falling within the spirit and scope of the present disclosure.
The various embodiments described herein may define individual implementations or implementations that rely on combinations of one or more of the described embodiments. Further, embodiments of the present invention may be practiced with various computer system configurations, including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.
Any of the operations described herein that form part of the invention are useful machine operations. The invention also relates to a device or an apparatus for performing these operations. The apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Having provided this detailed description, it will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims. When introducing elements of the present invention or the preferred embodiment(s) thereof, the articles “a”, “an”, “the” and “said” are intended to mean that there are one or more of the elements. The terms “comprising”, “including” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. In view of the above, it will be seen that the several objects of the invention are achieved and other advantageous results attained. As various changes could be made in the above systems without departing from the scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
With the above embodiments in mind, it should be understood that the inventions might employ various computer-implemented operations involving data stored in computer systems. Any of the operations described herein that form part of the inventions are useful machine operations. The inventions also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for the required purposes, such as the carrier network discussed above, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The inventions can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, DVDs, Flash, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Although the method operations were described in a specific order, it should be understood that other housekeeping operations may be performed between operations, that operations may be adjusted so that they occur at slightly different times, or that operations may be distributed in a system that allows processing operations to occur at various intervals associated with the processing, as long as the processing of the overlay operations is performed in the desired way.
Although example embodiments of the inventions have been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications can be practiced within the scope of the following claims. Moreover, the operations described above can be ordered, modularized, and/or distributed in any suitable way. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the inventions are not to be limited to the details given herein, but may be modified within the scope and equivalents of the following claims. In the following claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims or implicitly required by the disclosure.
This application is a continuation of U.S. patent application Ser. No. 16/697,110, filed Nov. 26, 2019, entitled “Large Data Transfer Using Visual Codes with Feedback Confirmation”, now U.S. Pat. No. 11,062,106, which is a continuation of and claims priority to and the benefit of U.S. patent application Ser. No. 15/784,093, filed on Oct. 14, 2017, entitled “Large Data Transfer Using Visual Codes with Feedback Confirmation”, now U.S. Pat. No. 10,509,932, which claims priority to and the benefit of U.S. Provisional Patent Application No. 62/408,699, filed on Oct. 14, 2016, entitled “Large Data Transfer Using Visual Codes and Feedback,” the contents of each of which are herein incorporated by reference in their entireties. U.S. patent application Ser. No. 15/784,093 is a continuation-in-part of and claims priority to and the benefit of U.S. patent application Ser. No. 15/208,580, filed on Jul. 12, 2016, entitled “Transferring Data Files Using a Series of Visual Codes,” now U.S. Pat. No. 10,007,826, which claims priority to and the benefit of U.S. Provisional Patent Application No. 62/304,934, filed on Mar. 7, 2016, entitled “An Identity Management Service Using a Block Chain,” the contents of each of which are herein incorporated by reference in their entireties. This application is related to U.S. patent application Ser. No. 15/146,872, entitled “Identity Management Service Using a Block Chain Providing Identity Transactions Between Devices”, filed on May 4, 2016 (now U.S. Pat. No. 10,007,913); U.S. patent application Ser. No. 15/146,881, entitled “Identity Management Service Using a Block Chain Providing Identity Certification Transactions Between Devices”, filed on May 4, 2016 (now U.S. Pat. No. 9,722,790); PCT Application No. 
PCT/US16/30863, entitled “Identity Management Service Using a Block Chain Providing Interaction Transactions Between Devices”, filed on May 4, 2016, and U.S. patent application Ser. No. 15/147,838, entitled “User Identification Management System and Method”, filed on May 5, 2016 (now U.S. Pat. No. 9,876,646). The disclosure of each of the applications identified above is incorporated herein by reference for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
5005200 | Fischer | Apr 1991 | A |
5901229 | Fujisaki et al. | May 1999 | A |
5923763 | Walker et al. | Jul 1999 | A |
5996076 | Rowney et al. | Nov 1999 | A |
6088717 | Reed et al. | Jul 2000 | A |
6310966 | Dulude et al. | Oct 2001 | B1 |
6785815 | Serret-Avila et al. | Aug 2004 | B1 |
6792536 | Teppler | Sep 2004 | B1 |
7043635 | Keech | May 2006 | B1 |
7225161 | Lam et al. | May 2007 | B2 |
7451116 | Parmelee et al. | Nov 2008 | B2 |
7502467 | Brainard et al. | Mar 2009 | B2 |
7873573 | Realini | Jan 2011 | B2 |
8056822 | Bourrieres et al. | Nov 2011 | B2 |
8078880 | Nanda et al. | Dec 2011 | B2 |
8249965 | Tumminaro | Aug 2012 | B2 |
8502060 | Ribner | Aug 2013 | B2 |
8607358 | Shankar et al. | Dec 2013 | B1 |
8744076 | Youn | Jun 2014 | B2 |
8832807 | Kuo et al. | Sep 2014 | B1 |
8966276 | Nanopoulos et al. | Feb 2015 | B2 |
9059858 | Giardina et al. | Jun 2015 | B1 |
9135787 | Russell et al. | Sep 2015 | B1 |
9172699 | Vazquez et al. | Oct 2015 | B1 |
9189788 | Robinson et al. | Nov 2015 | B1 |
9240058 | Amacker et al. | Jan 2016 | B1 |
9288047 | Brouwer et al. | Mar 2016 | B2 |
9331856 | Song | May 2016 | B1 |
9397985 | Seger, II et al. | Jul 2016 | B1 |
9608822 | Lochmatter et al. | Mar 2017 | B2 |
9646150 | Toth | May 2017 | B2 |
9679276 | Cuende | Jun 2017 | B1 |
9722790 | Ebrahimi | Aug 2017 | B2 |
9876646 | Ebrahimi et al. | Jan 2018 | B2 |
9887975 | Gifford et al. | Feb 2018 | B1 |
9948467 | King | Apr 2018 | B2 |
10007826 | Ebrahimi et al. | Jun 2018 | B2 |
10007913 | Ebrahimi | Jun 2018 | B2 |
10163105 | Ziraknejad et al. | Dec 2018 | B1 |
10255419 | Kragh | Apr 2019 | B1 |
10257179 | Saylor et al. | Apr 2019 | B1 |
10341091 | Keranen et al. | Jul 2019 | B2 |
10341123 | Ebrahimi et al. | Jul 2019 | B2 |
10417219 | Yang et al. | Sep 2019 | B1 |
10498541 | Ebrahimi et al. | Dec 2019 | B2 |
10498542 | Ebrahimi et al. | Dec 2019 | B2 |
10509932 | Ebrahimi et al. | Dec 2019 | B2 |
10587609 | Ebrahimi et al. | Mar 2020 | B2 |
10657532 | Ebrahimi | May 2020 | B2 |
10740584 | Ebrahimi et al. | Aug 2020 | B2 |
10805085 | Liang | Oct 2020 | B1 |
10979227 | Ebrahimi | Apr 2021 | B2 |
11062106 | Ebrahimi et al. | Jul 2021 | B2 |
11082221 | Ebrahimi et al. | Aug 2021 | B2 |
11134075 | Ebrahimi et al. | Sep 2021 | B2 |
11206133 | Ebrahimi et al. | Dec 2021 | B2 |
11263415 | Ebrahimi et al. | Mar 2022 | B2 |
11323272 | Ebrahimi et al. | May 2022 | B2 |
20010011350 | Zabetian | Aug 2001 | A1 |
20020016913 | Wheeler et al. | Feb 2002 | A1 |
20020071565 | Kurn et al. | Jun 2002 | A1 |
20020138735 | Felt et al. | Sep 2002 | A1 |
20020141593 | Kurn et al. | Oct 2002 | A1 |
20020170966 | Hannigan et al. | Nov 2002 | A1 |
20030014372 | Wheeler et al. | Jan 2003 | A1 |
20030046159 | Ebrahimi et al. | Mar 2003 | A1 |
20030070075 | Deguillaume et al. | Apr 2003 | A1 |
20030172273 | Hans | Sep 2003 | A1 |
20040064693 | Pabla et al. | Apr 2004 | A1 |
20050039040 | Ransom et al. | Feb 2005 | A1 |
20050091495 | Cameron et al. | Apr 2005 | A1 |
20050114447 | Cameron et al. | May 2005 | A1 |
20060041756 | Ashok et al. | Feb 2006 | A1 |
20060071077 | Suomela et al. | Apr 2006 | A1 |
20060075255 | Duffy et al. | Apr 2006 | A1 |
20060256961 | Brainard et al. | Nov 2006 | A1 |
20070016785 | Guay et al. | Jan 2007 | A1 |
20070017996 | Xia et al. | Jan 2007 | A1 |
20070033150 | Nwosu | Feb 2007 | A1 |
20070118479 | Halsema et al. | May 2007 | A1 |
20070277013 | Rexha et al. | Nov 2007 | A1 |
20070294538 | Lim et al. | Dec 2007 | A1 |
20080078836 | Tomita | Apr 2008 | A1 |
20080116277 | Tomita | May 2008 | A1 |
20080155253 | Liu | Jun 2008 | A1 |
20080178008 | Takahashi et al. | Jul 2008 | A1 |
20080235772 | Janzen | Sep 2008 | A1 |
20080267511 | Bourrieres et al. | Oct 2008 | A1 |
20090066478 | Colella | Mar 2009 | A1 |
20090132813 | Schibuk | May 2009 | A1 |
20090232346 | Zilch | Sep 2009 | A1 |
20090266882 | Sajkowsky | Oct 2009 | A1 |
20100020970 | Liu et al. | Jan 2010 | A1 |
20100023758 | Han et al. | Jan 2010 | A1 |
20100052852 | Mohanty | Mar 2010 | A1 |
20100070759 | Leon Cobos et al. | Mar 2010 | A1 |
20100088517 | Piersol | Apr 2010 | A1 |
20100100724 | Kaliski, Jr. | Apr 2010 | A1 |
20100191972 | Kiliccote | Jul 2010 | A1 |
20100228674 | Ogg et al. | Sep 2010 | A1 |
20100250939 | Adams et al. | Sep 2010 | A1 |
20100272193 | Khan et al. | Oct 2010 | A1 |
20110093249 | Holmes et al. | Apr 2011 | A1 |
20110121066 | Tian et al. | May 2011 | A1 |
20110231913 | Feng et al. | Sep 2011 | A1 |
20110286595 | Resch et al. | Nov 2011 | A1 |
20110302412 | Deng et al. | Dec 2011 | A1 |
20110307703 | Ogg et al. | Dec 2011 | A1 |
20120061461 | Bourrieres et al. | Mar 2012 | A1 |
20120067943 | Saunders et al. | Mar 2012 | A1 |
20120086971 | Bisbee et al. | Apr 2012 | A1 |
20120125997 | Burra et al. | May 2012 | A1 |
20120137131 | Lu et al. | May 2012 | A1 |
20120185398 | Weis et al. | Jul 2012 | A1 |
20120211567 | Herzig | Aug 2012 | A1 |
20120297190 | Shen et al. | Nov 2012 | A1 |
20120297464 | Busch et al. | Nov 2012 | A1 |
20120308003 | Mukherjee | Dec 2012 | A1 |
20130010958 | Yao | Jan 2013 | A1 |
20130014152 | Johnson et al. | Jan 2013 | A1 |
20130037607 | Bullwinkel | Feb 2013 | A1 |
20130065564 | Conner et al. | Mar 2013 | A1 |
20130111208 | Sabin et al. | May 2013 | A1 |
20130145152 | Maino et al. | Jun 2013 | A1 |
20130153666 | Edwards | Jun 2013 | A1 |
20130159021 | Felsher | Jun 2013 | A1 |
20130173915 | Haulund | Jul 2013 | A1 |
20130198822 | Hitchcock et al. | Aug 2013 | A1 |
20130228624 | Byrd et al. | Sep 2013 | A1 |
20130246261 | Purves et al. | Sep 2013 | A1 |
20130262309 | Gadotti | Oct 2013 | A1 |
20130262857 | Neuman et al. | Oct 2013 | A1 |
20130290733 | Branton et al. | Oct 2013 | A1 |
20130305059 | Gormley et al. | Nov 2013 | A1 |
20130311768 | Fosmark et al. | Nov 2013 | A1 |
20130318354 | Entschew et al. | Nov 2013 | A1 |
20130333009 | Mackler | Dec 2013 | A1 |
20140001253 | Smith | Jan 2014 | A1 |
20140006247 | Chai et al. | Jan 2014 | A1 |
20140006806 | Corella et al. | Jan 2014 | A1 |
20140032913 | Tenenboym et al. | Jan 2014 | A1 |
20140084067 | Vanderhulst | Mar 2014 | A1 |
20140093144 | Feekes | Apr 2014 | A1 |
20140208403 | Lu et al. | Jul 2014 | A1 |
20140223175 | Bhatnagar | Aug 2014 | A1 |
20140237565 | Fleysher | Aug 2014 | A1 |
20140254796 | Li et al. | Sep 2014 | A1 |
20140256423 | Williams et al. | Sep 2014 | A1 |
20140282961 | Dorfman et al. | Sep 2014 | A1 |
20140289842 | Cornick et al. | Sep 2014 | A1 |
20140304517 | Chidambaram et al. | Oct 2014 | A1 |
20140344015 | Puertolas-Montanes et al. | Nov 2014 | A1 |
20150019456 | Smith | Jan 2015 | A1 |
20150047000 | Spencer, III et al. | Feb 2015 | A1 |
20150081567 | Boyle et al. | Mar 2015 | A1 |
20150095352 | Lacey | Apr 2015 | A1 |
20150095999 | Toth | Apr 2015 | A1 |
20150104013 | Holman et al. | Apr 2015 | A1 |
20150106626 | Kremp et al. | Apr 2015 | A1 |
20150178515 | Cooley et al. | Jun 2015 | A1 |
20150244690 | Mossbarger | Aug 2015 | A1 |
20150262138 | Hudon | Sep 2015 | A1 |
20150269389 | Lee | Sep 2015 | A1 |
20150269614 | Kramer | Sep 2015 | A1 |
20150278805 | Spencer, III et al. | Oct 2015 | A1 |
20150278820 | Meadows | Oct 2015 | A1 |
20150302401 | Metral | Oct 2015 | A1 |
20150332283 | Witchey | Nov 2015 | A1 |
20150356523 | Madden | Dec 2015 | A1 |
20150356555 | Pennanen | Dec 2015 | A1 |
20150365436 | Shenefiel et al. | Dec 2015 | A1 |
20160005032 | Yau et al. | Jan 2016 | A1 |
20160012465 | Sharp | Jan 2016 | A1 |
20160028452 | Chu et al. | Jan 2016 | A1 |
20160028552 | Spanos et al. | Jan 2016 | A1 |
20160072800 | Soon-Shiong et al. | Mar 2016 | A1 |
20160094348 | Takahashi | Mar 2016 | A1 |
20160098723 | Feeney | Apr 2016 | A1 |
20160098730 | Feeney | Apr 2016 | A1 |
20160125416 | Spencer et al. | May 2016 | A1 |
20160134593 | Gvili | May 2016 | A1 |
20160162897 | Feeney | Jun 2016 | A1 |
20160180338 | Androulaki et al. | Jun 2016 | A1 |
20160203572 | Mcconaghy et al. | Jul 2016 | A1 |
20160212146 | Wilson | Jul 2016 | A1 |
20160217356 | Wesby | Jul 2016 | A1 |
20160217436 | Brama | Jul 2016 | A1 |
20160261411 | Yau et al. | Sep 2016 | A1 |
20160269403 | Koutenaei et al. | Sep 2016 | A1 |
20160283920 | Fisher et al. | Sep 2016 | A1 |
20160283939 | Finlow-Bates | Sep 2016 | A1 |
20160283941 | Andrade | Sep 2016 | A1 |
20160300234 | Moss-Pultz et al. | Oct 2016 | A1 |
20160314462 | Hong et al. | Oct 2016 | A1 |
20160328713 | Ebrahimi | Nov 2016 | A1 |
20160330027 | Ebrahimi | Nov 2016 | A1 |
20160330035 | Ebrahimi et al. | Nov 2016 | A1 |
20160337351 | Spencer et al. | Nov 2016 | A1 |
20160351080 | Bhatnagar et al. | Dec 2016 | A1 |
20160373440 | Mather et al. | Dec 2016 | A1 |
20170041296 | Ford et al. | Feb 2017 | A1 |
20170048252 | Straub et al. | Feb 2017 | A1 |
20170085377 | Pogmore et al. | Mar 2017 | A1 |
20170103389 | Sorensen et al. | Apr 2017 | A1 |
20170109735 | Sheng et al. | Apr 2017 | A1 |
20170180128 | Lu | Jun 2017 | A1 |
20170200160 | Kumar et al. | Jul 2017 | A1 |
20170228731 | Sheng et al. | Aug 2017 | A1 |
20170236121 | Lyons et al. | Aug 2017 | A1 |
20170255805 | Ebrahimi et al. | Sep 2017 | A1 |
20170257358 | Ebrahimi et al. | Sep 2017 | A1 |
20170279788 | Rosenblum et al. | Sep 2017 | A1 |
20170324711 | Feeney et al. | Nov 2017 | A1 |
20170344987 | Davis | Nov 2017 | A1 |
20170344988 | Cusden et al. | Nov 2017 | A1 |
20170346851 | Drake | Nov 2017 | A1 |
20170357826 | Gouget et al. | Dec 2017 | A1 |
20170359723 | Pal et al. | Dec 2017 | A1 |
20170372300 | Dunlevy et al. | Dec 2017 | A1 |
20180068103 | Pitkanen et al. | Mar 2018 | A1 |
20180077144 | Gangawane et al. | Mar 2018 | A1 |
20180082050 | Flink et al. | Mar 2018 | A1 |
20180082256 | Tummuru et al. | Mar 2018 | A1 |
20180144153 | Pead | May 2018 | A1 |
20180173906 | Rodriguez et al. | Jun 2018 | A1 |
20180176017 | Rodriguez et al. | Jun 2018 | A1 |
20180205556 | Rieul | Jul 2018 | A1 |
20180240107 | Andrade | Aug 2018 | A1 |
20180248699 | Andrade | Aug 2018 | A1 |
20180308098 | Ebrahimi | Oct 2018 | A1 |
20180343120 | Andrade | Nov 2018 | A1 |
20180359103 | Geupel | Dec 2018 | A1 |
20190005470 | Uhr et al. | Jan 2019 | A1 |
20190149537 | Ebrahimi et al. | May 2019 | A1 |
20190163896 | Balaraman et al. | May 2019 | A1 |
20190182042 | Ebrahimi et al. | Jun 2019 | A1 |
20190228178 | Sharma et al. | Jul 2019 | A1 |
20190342096 | Starosielsky et al. | Nov 2019 | A1 |
20200127826 | Ebrahimi et al. | Apr 2020 | A1 |
20200127832 | Ebrahimi | Apr 2020 | A1 |
20200145219 | Sebastian et al. | May 2020 | A1 |
20200186505 | Amar et al. | Jun 2020 | A1 |
20200265202 | Ebrahimi et al. | Aug 2020 | A1 |
20200267003 | Ebrahimi et al. | Aug 2020 | A1 |
20200344062 | Haldar et al. | Oct 2020 | A1 |
20210064780 | Riedel et al. | Mar 2021 | A1 |
20210192166 | Ebrahimi et al. | Jun 2021 | A1 |
20220029799 | Subudhi et al. | Jan 2022 | A1 |
20220029802 | Ebrahimi et al. | Jan 2022 | A1 |
20220029807 | Ebrahimi | Jan 2022 | A1 |
20220078178 | Ebrahimi et al. | Mar 2022 | A1 |
Number | Date | Country |
---|---|---|
2005260490 | Sep 2005 | JP |
2006179016 | Jul 2006 | JP |
2008518335 | May 2008 | JP |
2012114907 | Jun 2012 | JP |
20130055794 | May 2013 | KR |
WO-2016179334 | Nov 2016 | WO |
WO-2017152150 | Sep 2017 | WO |
WO-2018145127 | Aug 2018 | WO |
WO-2019113552 | Jun 2019 | WO |
Entry |
---|
Barreto, P. S. L. M. et al., (2001) “Fast hashing onto elliptic curves over fields of characteristic 3,” [Online], Cryptology ePrint Archive: Report 2001/098, Retrieved from the Internet: URL: https://eprint.iacr.org/2001/098/, 12 pages. |
Biggs, J., “Your Next Passport Could Be on the Blockchain”, Oct. 31, 2014, 6 pages. |
Boneh, D. et al., (2001) “Short signatures from the Weil pairing,” International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT 2001: Advances in Cryptology, [Online], Retrieved from the Internet: URL: https://www.iacr.org/archive/asiacrypt2001/22480516.pdf, pp. 516-534. |
Dillet, R., “Stampery Now Lets You Certify Documents Using the Blockchain and Your Real Identity,” Nov. 20, 2015, 6 pages. |
Drew-Cordell, “Developer Creates Blockchain Passport Technology Based on Bitcoin”, Oct. 31, 2014, 16 pages. |
Ellis, C., “Create Your Own Blockchain ID”, Bitnation, Oct. 24, 2014, 14 pages. |
Ellison, C. et al., (2000) “Ten risks of PKI: What you're not being told about public key infrastructure,” Computer Security Journal, vol. 16, No. 1, pp. 1-8. |
Extended European Search Report for European Application No. 16790050.5, dated Apr. 26, 2018, 10 pages. |
Extended European Search Report for European Application No. 17760964.1, dated Oct. 24, 2019, 11 pages. |
Extended European Search Report for European Application No. 18885688.4, dated Jul. 23, 2021, 5 pages. |
Github, Inc., “World Citizenship, Creating Affordable Decentralised Passport Services Using Available Cryptographic Tools,” (Oct. 2014), Retrieved from the Internet on Nov. 17, 2017, URL: https://github.com/MrChrisJ/World-Citizenship, 12 pages. |
Gupta, V., “State in a Box—Identity Services Architecture,” CheapID, 2006-2009, 42 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2016/030863, dated Sep. 14, 2016, 9 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2017/020829, dated Jul. 17, 2017, 12 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2018/017136, dated Apr. 26, 2018, 12 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2018/064623, dated May 14, 2019, 7 pages. |
Kirk, J., “Could the Bitcoin network be used as an ultrasecure notary service?”, IDG News Service, Computerworld, Inc., May 23, 2013, 3 pages. |
Menezes, A. J. et al., Chapter 9, In: Handbook of Applied Cryptography, CRC Press, Boca Raton, FL, pp. 321-383, Oct. 1996. |
Nakamoto, S., “Bitcoin: A Peer-to-Peer Electronic Cash System,” Jan. 13, 2009, Retrieved from the Internet: URL: https://web.archive.org/web/20090131115053/http://www.bitcoin.org/bitcoin.pdf [retrieved on Jun. 30, 2017], 9 pages. |
Notice of Reasons for Refusal for Japanese Application No. 2018-510317, dated Sep. 1, 2020, 6 pages. |
Office Action for European Application No. 16790050.5, dated Nov. 21, 2019, 7 pages. |
Office Action for European Application No. 17760964.1, dated Oct. 20, 2020, 3 pages. |
Office Action for U.S. Appl. No. 15/146,872, dated Jun. 15, 2017, 12 pages. |
Office Action for U.S. Appl. No. 15/146,872, dated Sep. 27, 2016, 8 pages. |
Office Action for U.S. Appl. No. 15/146,881, dated Oct. 13, 2016, 8 pages. |
Office Action for U.S. Appl. No. 15/208,580, dated Jul. 7, 2017, 9 pages. |
Office Action for U.S. Appl. No. 15/208,580, dated Mar. 21, 2017, 8 pages. |
Office Action for U.S. Appl. No. 15/208,580, dated Oct. 25, 2017, 9 pages. |
Office Action for U.S. Appl. No. 15/449,902, dated Jun. 19, 2019, 10 pages. |
Office Action for U.S. Appl. No. 15/640,795, dated May 24, 2019, 8 pages. |
Office Action for U.S. Appl. No. 15/784,093, dated Apr. 15, 2019, 11 pages. |
Office Action for U.S. Appl. No. 15/784,093, dated Nov. 29, 2018, 9 pages. |
Office Action for U.S. Appl. No. 15/878,353, dated Aug. 8, 2018, 14 pages. |
Office Action for U.S. Appl. No. 16/018,773, dated Apr. 30, 2021, 45 pages. |
Office Action for U.S. Appl. No. 16/018,773, dated Jul. 28, 2020, 34 pages. |
Office Action for U.S. Appl. No. 16/019,411, dated Apr. 12, 2019, 12 pages. |
Office Action for U.S. Appl. No. 16/019,411, dated Sep. 16, 2019, 9 pages. |
Office Action for U.S. Appl. No. 16/214,029, dated Apr. 30, 2021, 22 pages. |
Office Action for U.S. Appl. No. 16/214,029, dated Oct. 22, 2020, 24 pages. |
Office Action for U.S. Appl. No. 16/227,632, dated Oct. 27, 2020, 9 pages. |
Office Action for U.S. Appl. No. 16/656,459, dated Sep. 24, 2020, 8 pages. |
Office Action for U.S. Appl. No. 16/697,110, dated Aug. 4, 2020, 7 pages. |
Office Action for U.S. Appl. No. 16/697,131, dated Apr. 26, 2021, 6 pages. |
Office Action for U.S. Appl. No. 16/697,131, dated Oct. 16, 2020, 12 pages. |
Office Action for U.S. Appl. No. 16/986,817, dated Apr. 6, 2021, 10 pages. |
Extended European Search Report for European Application No. 21181229.2, dated Jan. 14, 2022, 9 pages. |
Office Action for U.S. Appl. No. 17/738,106, dated Jul. 11, 2022, 11 pages. |
Panchamia, S. et al., “Passport, VISA and Immigration Management using Blockchain,” 2017 23rd Annual Conference on Advanced Computing and Communications, 2018, 10 pages. |
Stallings, W., Chapter 10: Digital Signatures and Authentication Protocols, In: Cryptography and Network Security: Principles and Practice, Second Edition, pp. 299-314, 1998. |
Number | Date | Country | |
---|---|---|---|
20210406495 A1 | Dec 2021 | US |
Number | Date | Country | |
---|---|---|---|
62408699 | Oct 2016 | US | |
62304934 | Mar 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16697110 | Nov 2019 | US |
Child | 17370731 | US | |
Parent | 15784093 | Oct 2017 | US |
Child | 16697110 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15208580 | Jul 2016 | US |
Child | 15784093 | US |