The present disclosure relates to data protection and more specifically to combining a user PIN with a unique, device-specific identifier.
As more individuals and enterprises rely on smartphones and other mobile devices to store confidential or sensitive information, security is an increasing concern. Because such mobile devices are used as communication centers, they frequently contain information of high potential value and/or sensitivity, such as contact information, call logs, emails, and pictures. In certain applications, protecting this information is desirable. In some applications, encryption is used to protect sensitive information.
Encryption is the process of transforming a message into ciphertext that cannot be understood by unintended recipients. A message is encrypted with an encryption algorithm and encryption key. Decryption is the process of transforming ciphertext back to the message in a readable or understandable form.
In many cases, users select short personal identification numbers (PINs) or passwords which an attacker can easily compromise with a brute force attack running on a modestly powerful computer or group of computers. However, users are often reluctant to select a longer password because longer passwords are more difficult to remember, or users are unable to select a longer password because the system limits the password length. What is needed in the art is an improved approach for protecting content based on a user password.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
Disclosed are systems, methods, and non-transitory computer-readable storage media for file-level data protection, specifically encryption and key management. A system practicing the method encrypts each file with a unique file encryption key, encrypts each file encryption key with a class encryption key, and encrypts each class encryption key with an additional encryption key.
In one embodiment, the system encrypts a credential keychain. A credential keychain can be a database or files that store credentials. The system encrypts at least a subset of credentials with a unique credential encryption key, encrypts each unique credential encryption key with a class encryption key and encrypts each credential class encryption key with an additional encryption key.
The system assigns each respective file or credential to one of a set of protection classes, and assigns each protection class a class encryption key. The protection classes allow for certain file behavior and access rights, such as write-only access, read/write access, and read-only, no write access. The system encrypts the class encryption key based on a combination of one or more of a user passcode, a public encryption key and a unique device specific code.
In a second embodiment, the system verifies a password. A system practicing the method decrypts a key bag containing encryption keys with a user entered password. Each encryption key in the key bag is associated with a protection class on a device having file-level data protection. The system retrieves data from one or more encrypted files using a class encryption key from the decrypted key bag. Then the system verifies the entered password based on a comparison of the retrieved data with expected data.
In a third embodiment, the system generates a cryptographic key based on a device-specific identifier. A system practicing the method receives a user-entered passcode on a device and combines the passcode with a non-extractable secret associated with the device to yield a derived master key. The system generates the master key according to an encryption algorithm. Then the system encrypts content on the device with the derived key.
A mobile, stationary, or combination of multiple computing devices can practice the principles disclosed herein. Other applications and combinations of the principles disclosed herein also exist, for example protecting system data based on file-level data protection.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
The present disclosure addresses the need in the art for improved encryption approaches. The encryption approaches herein are based on a per-file and per-class encryption or data protection scheme. A brief introductory description with reference to these approaches will be provided, followed by a discussion of a basic, general-purpose system or computing device in
The data protection features disclosed herein can safeguard user data in the event of a stolen device. Current encryption schemes encrypt all data stored on a device with a single symmetric encryption key that is available when the system is running. Thus, if the device is cracked such that the attacker can run his own code on the device, the user's data is accessible to the attacker.
In one aspect, the approaches set forth herein protect the user's data by encrypting it with a secret known only to the user, such as a passcode. As a result, if the user has enabled data protection but has not entered his passcode since a device reboot, his data will not be accessible to the system. However, this approach introduces a number of complications, mostly surrounding processes that access user data in the background, even while the device is locked, such as email and calendar information. Furthermore, this same set of data is necessary to properly back up, sync, and potentially restore the user's data.
In one aspect where the system encrypts all new files on a file system, the data protection feature relies on every file on the data partition being individually encrypted with a unique symmetric encryption key. This encryption mechanism can supplant existing hardware encryption features by taking advantage of the hardware acceleration in the kernel without significant performance degradation. The system uses AES in direct memory access (DMA) so that a memory-to-memory encryption operation is not needed. However, the principles disclosed herein can be performed by a general purpose processor executing appropriate encryption instructions, a special purpose processor designed to perform encryption-based calculations, or a combination thereof.
The system can generate a random 256-bit AES key (or other size or type of key) to associate with a file when the file is created. An AES key is a cryptographic key used to perform encryption and decryption using the Advanced Encryption Standard algorithm. All input and output (I/O) operations performed on that file use that AES key so that the raw file data is only written to the file system in encrypted form. This individual file key accompanies the file as metadata, so that the file and key can be backed up and restored without having to access the file contents. The system can tell if a passcode is in compliance based on the metadata even when the passcode is not stored directly. This feature can be useful, for example, when testing passcode compliance with any local and/or server restrictions on the passcode strength such as an Exchange server password policy.
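For illustration, the following sketch shows the general shape of this scheme in Python: a random 256-bit file key is created with the file, the raw data is encrypted under that key, and the file key itself is wrapped with a class key so that it can accompany the file as metadata. The helper name, the use of CBC with simple padding, and the key-wrap primitive are assumptions for the sketch; an actual implementation would use the kernel's hardware AES engine rather than a software library.

```python
import os
from cryptography.hazmat.primitives import keywrap
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def create_protected_file(plaintext: bytes, class_key: bytes):
    """Sketch: encrypt new file data with a per-file key, then wrap that key with a class key."""
    file_key = os.urandom(32)            # random 256-bit AES key generated at file creation
    iv = os.urandom(16)

    # Pad to the AES block size and encrypt the raw file data with the per-file key.
    pad = 16 - len(plaintext) % 16
    padded = plaintext + bytes([pad]) * pad
    encryptor = Cipher(algorithms.AES(file_key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    # The wrapped file key travels with the file as metadata; the raw key never reaches disk.
    wrapped_file_key = keywrap.aes_key_wrap(class_key, file_key)
    return ciphertext, {"iv": iv, "wrapped_file_key": wrapped_file_key}
```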
In one variation, the system defines a new mount option to be used for devices that support content encryption. This mount option instructs the kernel that all new files created on the partition should be encrypted by default. This option is not used for system partitions, as those files do not need to be encrypted, or for data partitions on older devices that do not support data protection.
When restoring backed up data to a device, a restore daemon can look for a new option in the device tree that indicates the device does not support data protection. In one implementation, the restore daemon is responsible for laying down the fstab file on the system partition at /private/etc/fstab. The fstab file can contain at least two entries. The first entry instructs the kernel to mount the system partition as a read only volume. The second entry instructs the kernel to mount the data partition at /private/var as a writable volume with the new data protection option. In another implementation, instead of using a new mount option that must explicitly be set in the fstab file, a Hierarchical File System (HFS) option is added in the header. The mounter auto detects that data protection should be turned on.
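Purely as an illustration of such an fstab, the two entries might look like the following; the device nodes and the name of the data protection mount option (shown here as "protect") are assumptions, since the disclosure does not fix particular names.

```
# /private/etc/fstab (illustrative only; device nodes and option name are assumed)
/dev/disk0s1    /             hfs  ro                   0 1
/dev/disk0s2s1  /private/var  hfs  rw,suid,dev,protect  0 2
```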
When a user enters a password, the system uses the entered password to derive a key which is used to decrypt the class keys. Alternatively, the system can derive a key from any user controlled source, such as a dongle. A dongle is a small piece of hardware that connects to a device. Each class key is wrapped with integrity, which allows the system to determine whether the unwrapping proceeded correctly. If the system unwraps all keys correctly, the system accepts the password. In one aspect, the system tries to decrypt all keys to maximize the time spent decrypting.
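A minimal sketch of this acceptance check, assuming AES key wrap (RFC 3394) as the integrity-preserving wrap and PBKDF2 as the derivation step; both primitives, the iteration count, and the helper name are stand-ins rather than the specific choices of any particular device.

```python
import hashlib
from cryptography.hazmat.primitives import keywrap

def try_passcode(passcode: str, salt: bytes, wrapped_class_keys: dict):
    """Derive a key from the entered passcode and attempt to unwrap every class key.

    The key wrap carries an integrity check, so a wrong passcode makes unwrapping fail
    rather than silently yielding garbage. Every key is attempted regardless of failures.
    """
    derived = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000, dklen=32)
    unwrapped, all_ok = {}, True
    for name, wrapped in wrapped_class_keys.items():
        try:
            unwrapped[name] = keywrap.aes_key_unwrap(derived, wrapped)
        except keywrap.InvalidUnwrap:
            all_ok = False                 # keep going rather than short-circuiting
    return unwrapped if all_ok else None   # accept the passcode only if every key unwrapped
```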
These and other variations shall be discussed herein as the various embodiments are set forth. The disclosure now turns to
With reference to
The system bus 110 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 140 or the like may provide the basic routine that helps to transfer information between elements within the computing device 100, such as during start-up. The computing device 100 further includes storage devices 160 such as a hard disk drive, a magnetic disk drive, an optical disk drive, tape drive or the like. The storage device 160 can include software modules 162, 164, 166 for controlling the processor 120. Other hardware or software modules are contemplated. The storage device 160 is connected to the system bus 110 by a drive interface. The drives and the associated computer readable storage media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing device 100. In one aspect, a hardware module that performs a particular function includes the software component stored in a tangible and/or intangible computer-readable medium in connection with the necessary hardware components, such as the processor 120, bus 110, output device 170, and so forth, to carry out the function. The basic components are known to those of skill in the art and appropriate variations are contemplated depending on the type of device, such as whether the computing device 100 is a small, handheld computing device, a desktop computer, or a computer server.
Although the exemplary embodiment described herein employs flash memory storage 160, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that are accessible by a computer, such as a hard disk drive, magnetic cassettes, flash memory, digital versatile disks, cartridges, random access memories (RAMs) 150, read only memory (ROM) 140, a cable or wireless signal containing a bit stream and the like, may also be used in the exemplary operating environment. Tangible computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
To enable user interaction with the computing device 100, an input device 190 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 170 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 100. The communications interface 180 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
For clarity of explanation, the illustrative system embodiment is presented as including individual functional blocks including functional blocks labeled as a “processor” or processor 120. The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software and hardware, such as a processor 120, that is purpose-built to operate as an equivalent to software executing on a general purpose processor. For example the functions of one or more processors presented in
The logical operations of the various embodiments are implemented as: 1) a sequence of computer implemented steps, operations, or procedures running on a programmable circuit within a general use computer, 2) a sequence of computer implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or 3) interconnected machine modules or program engines within the programmable circuits. The system 100 shown in
Having disclosed an exemplary computing system, the disclosure now turns to a brief discussion of public-key cryptography. Public-key cryptography is a cryptographic approach that utilizes asymmetric key algorithms in addition to or in place of traditional symmetric key algorithms.
In public-key cryptography, a mathematically related key pair is generated, a private key and a public key. Although the keys are related, it is impractical to derive one key based on the other. The private key is kept secret and the public key is published. A sender encrypts a message with the receiver's public key 230, and the receiver of the message decrypts it with the private key 240. Only the receiver's private key can decrypt the encrypted message.
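As a brief, generic illustration of this exchange (using RSA with OAEP padding purely as an example asymmetric scheme, not as a statement of which algorithm the disclosed system uses):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()        # published; the private key is kept secret

ciphertext = public_key.encrypt(b"message for the key holder", oaep)  # anyone can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)                     # only the receiver can decrypt
assert plaintext == b"message for the key holder"
```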
Having disclosed some basic encryption-related concepts and system components, the disclosure now turns to the exemplary method embodiment shown in
Once each file is encrypted with its own unique key, the system 100 can protect each one of those files with a secret known only to the user. When a file is created and an individual file encryption key is generated, the system 100 can wrap that key with a class key. The unique file encryption key is metadata that the system 100 can store in the filesystem or which can exist in user space. The kernel can then cache the key during file access. By always encrypting a file and then wrapping its file key with a class key, the system 100 avoids the cost of encrypting every file already created when the user enables data protection. Instead, the system 100 simply encrypts the set of class keys, which is a bounded and relatively inexpensive computational operation. With data protection enabled, if the user has not entered his passcode, then the class keys are not available. If the kernel cannot access the class keys, it cannot decrypt the individual file keys and the raw file data is inaccessible to the system. The efficacy of the feature now depends on how the class keys are managed.
When the device, such as a smartphone or personal computer, is locked, the system explicitly purges keys stored in memory as well as any data protected file contents stored in memory which should be inaccessible when the device is locked. For example, the system 100 can purge keys associated with protection classes A, B, C, but not class D when the device enters or is about to enter a locked state. The device can also purge or otherwise remove access to the contents of files stored in memory which are associated with classes A, B, C.
For example, protection classes can provide different functionality for different levels of authentication. The scenario set forth below illustrates one example application of protection classes. When a device that has data protection enabled first boots, the user has not yet entered his passcode. Thus none of the files are accessible because the class keys themselves are encrypted. Because the system relies on preference and configuration files that live on the data partition, the class keys must be decrypted before the files can be accessed. If those files cannot be read, then certain mission critical components are not able to boot to the point where the user can enter his passcode. One compromise is to separate the types of files that are accessible when the device has first booted from files that should only be accessible when the user has entered his passcode. The files can be separated into protection classes. Protection classes can include many aspects of policy for transformation, such as readability, writability, exportability, and so forth. Some classes are associated with specific user actions, such as generating new keys without erasing the entire device when a user changes his or her password, for example.
One example protection class, known as Class A, is a basic class for data protected files. When the device first boots, Class A files are not accessible until the user enters his passcode. When the device is locked, these files become inaccessible. Some applications and/or system services may need to adapt to Class A because they cannot access their files when the device is locked, even if the application or system service is running in the background.
Another example protection class, known as Class B, is a specialized class for data protected files that require write access even when the device is locked. When the device first boots, Class B files are not accessible until the user enters his passcode. When the device is locked, these files can only be written to and not read. One example use for Class B files is for content downloaded while the device is locked, such as email messages, text messages, cached updates, a cache mail database for messages downloaded while the device is locked, and so forth. When the device is later unlocked, such files can be read. For example, when the device is later unlocked, the cache mail database can be reconciled with the primary mail database.
Another example protection class, known as Class C, is a specialized class for data protected files that require read/write access even when the device is locked. For example, when the device first boots, these files are not accessible until the user enters his passcode. When the device is locked, these files are still accessible. Class C files can be used for databases that need to be accessible while the device is locked. Some other example uses for Class C include data that can always be read once the device has been unlocked once after boot, even if it locks again, such as contacts and a calendar.
Yet another example protection class, known as Class D, is a default class for data protected files. Class D files are accessible regardless of whether the user has entered his passcode.
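The behavior of these four classes can be summarized as a small access-policy table; the sketch below encodes the rules described above and is illustrative only, not an interface of any particular operating system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DeviceState(Enum):
    BEFORE_FIRST_UNLOCK = auto()   # booted, passcode not yet entered
    UNLOCKED = auto()
    LOCKED = auto()                # locked again after at least one unlock

@dataclass(frozen=True)
class ClassPolicy:
    readable: frozenset
    writable: frozenset

POLICIES = {
    "A": ClassPolicy(frozenset({DeviceState.UNLOCKED}),
                     frozenset({DeviceState.UNLOCKED})),
    "B": ClassPolicy(frozenset({DeviceState.UNLOCKED}),                        # readable only when unlocked
                     frozenset({DeviceState.UNLOCKED, DeviceState.LOCKED})),   # but writable while locked
    "C": ClassPolicy(frozenset({DeviceState.UNLOCKED, DeviceState.LOCKED}),
                     frozenset({DeviceState.UNLOCKED, DeviceState.LOCKED})),   # available after first unlock
    "D": ClassPolicy(frozenset(DeviceState), frozenset(DeviceState)),          # always accessible
}

def can_read(protection_class: str, state: DeviceState) -> bool:
    return state in POLICIES[protection_class].readable

def can_write(protection_class: str, state: DeviceState) -> bool:
    return state in POLICIES[protection_class].writable
```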
While four classes are discussed in detail herein, the number of protection classes can be larger or smaller, such as 2 protection classes, 10 protection classes, or more, and can include protection classes granting different access rights and performing different sets of functionality than what is discussed herein. Several additional exemplary protection classes follow. For example, one protection class can be a specialized class for files that are tied to a single device using the UID or keys derived from the UID and cannot be migrated to a second device. A second exemplary protection class can be a specialized class associated with a specific application. A third exemplary protection class can generate new keys whenever an escape from a previous escrow is needed without erasing the whole device, for example on a password change. The system 100 can create a new generation of the system key bag on every passcode change, especially when blastable storage contains a key that wraps system key bags, such that former weak key bags (such as an original key bag protected by an empty passcode) become inaccessible on a passcode change.
In one aspect, when the system 100 changes states, such as going from locked to unlocked or vice versa, the system 100 erases certain class keys from memory. For example, if the device has been locked, it can erase the Class A key from memory and treat Class B as write only.
As operating systems are upgraded to use updated sets of classes having new keys and/or entirely new classes, the system 100 can store a new Class A key, for example. In this example, the system 100 uses the new Class A key for newly created files, while the system retains the older Class A key for dealing with older files. This can provide a protection class aware migration path for updating class keys in the event that they are cracked or more efficient algorithms or hardware are developed.
With respect to keychain backup items, the system can consider two dimensions. The first is the class itself (Ak, Ck, Dk), and the second is whether or not the keychain item is protected with the device UID and thus cannot be transferred to other devices. If the keychain item is protected with the device UID, it can only be restored to the same device it was backed up from. Those classes are known as Aku, Cku, Dku. The additional "u" state is used for backup protection to indicate whether the item can be transferred to a different device: if the "u" state is present, the item cannot be restored to a different device. The class (A, C, D) is used the same way at runtime on the device, regardless of the "u" state. If the system includes additional classes, the second dimension (whether or not the item is also wrapped with the UID and can or cannot be restored to a different device) would also apply to those additional key classes.
In one embodiment, the system 100 encrypts a credential keychain.
The system 100 encrypts the class encryption keys based on a combination of one or more of a user passcode, a public encryption key and a unique device specific code depending on the type of key bag in which the keys are stored. Key bags are a set of keys accessible to the system, such as an operating system kernel. In one variation, each key bag encrypts individual class keys in a unique way based, for example, on a unique combination of the user passcode, the public encryption key, and the unique device specific code. One key bag encrypts class keys based on the user passcode and the unique device specific code, another key bag encrypts class keys based just on the unique device specific code, and yet another key bag encrypts class keys based on all three, for example. The system 100 stores class encryption keys in key bags, such as a default key bag, a protected key bag, an escrow key bag, and a backup key bag.
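One way to picture this arrangement is as a record of which inputs protect the class keys held in each bag; the following sketch uses hypothetical field names and is only meant to make the structure concrete.

```python
from dataclasses import dataclass, field

@dataclass
class KeyBag:
    """A set of class keys, each stored wrapped under the listed protection inputs."""
    name: str
    protected_by: tuple                      # secrets needed to unwrap this bag's class keys
    wrapped_class_keys: dict = field(default_factory=dict)   # class name -> wrapped key bytes

default_bag   = KeyBag("default",   ("device-specific code",))
protected_bag = KeyBag("protected", ("user passcode", "device-specific code"))
escrow_bag    = KeyBag("escrow",    ("device-specific code", "escrow public key"))
backup_bag    = KeyBag("backup",    ("backup public key",))   # holds different class keys than the others
```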
In one embodiment, the key bags are accessible in user space, but their contents can only be accessed in kernel space by a special kernel extension. A daemon in user space can provide the kernel with the proper key bag and the information necessary to access its contents. Further, backup and sync components on the host generally need to coordinate with the device in order to make data accessible while the device is locked. This coordination can be handled by a lockdown service agent that proxies their requests to a management daemon, which in turn coordinates with the kernel extension.
The system 100 uses the different key bags for different purposes. For example, the backup key bag and/or the escrow key bag can be used in backing up a device or synchronizing devices. A default key bag can protect the device in its initial state before a user enables data protection such as by creating a passcode. In one aspect, the backup key bag is never kept on the device. The backup key bag is part of the backup, and is used to encrypt the files in the actual backup, not any files on the device. When restoring to a device, the backup key bag is sent over, so the restored files from the backup can be decrypted.
In another variation, the escrow key bag is kept on the device, but it cannot be used by the device without a secret that is only kept on the backup host. The escrow key bag is used to access files and keychain items on the device so they can be backed up, even if the device is locked and they normally could not be accessed. The backup host can be a computer, another mobile device, a server or collection of servers, a web service, or just a drive. Such a drive, for example, can require some credential from the device to gain access, but once the device can access it, the backup is simply stored on the drive, which is not an active agent.
Regardless of the backup device type, the backed up files do not have to be encrypted with the same file keys as the ones on the device. In one embodiment, the backup device transfers the files as-is (with the same encryption), and encrypts the file keys themselves with the backup key bag's class keys. In another embodiment, the files are actually transcrypted (converted from one encryption scheme or key to another encryption scheme or key) using a file key that is different and distinct from the file key used on the device.
The protected key bag 720 contains all class encryption keys encrypted by the user key and the unique device specific code. The user key can be the same as the user passcode or can be derived from the user passcode. The user key is an encryption key based on the user passcode. When the user enables data protection, such as by creating a passcode, the system 100 converts the user's passcode into a derived secret that can be used to protect the protection class keys. A new key bag, the protected key bag, is generated that contains the protection class keys encrypted by the user key and the unique device specific code.
In one aspect, when a user locks his device or the device automatically places itself in a locked state, the system 100 can grant certain applications a grace period to finalize their data and write it to mass storage before enforcing the class encryption keys for a locked state. For example, if a user is composing an email on a mobile device and leaves mid-composition, the mobile device can automatically lock after a timeout duration. After the mobile device is locked, the system can grant the email application a grace period and/or notify the email application of the grace period duration so that the email application can save the half composed email as a draft despite the mobile device's locked state, for example.
A third key bag 730, the escrow key bag, contains all class encryption keys encrypted by the unique device specific code and a public key 210 relating to an asymmetric key pair. The system 100 utilizes the escrow key bag 730 during synchronization and/or backup operations. Lastly, the backup key bag 740 contains all class encryption keys encrypted by the public key. The system uses the backup key bag 740 during a backup event. It is important to note that the backup key bag 740 contains different class encryption keys than the default, protected and escrow key bags 710, 720, 730. In one variation, the backup and escrow key bags 730, 740 are protected by the public key generated by the device, not the user passcode. Because it may be impractical for a user to enter a passcode each time the system 100 performs a backup or synchronization, the system 100 can protect the protection class keys with a key that does not relate to the user passcode. The backup host can store the backup key bag.
Because the default key bag is not protected by a user passcode, the device is vulnerable to attack. For example, if an attacker steals the device and executes malicious computer code on it, he can access the device specific code and decrypt all class keys. Sensitive user data is no longer protected once an attacker decrypts the class keys because the attacker can decrypt all file encryption keys. The attacker can then decrypt files with the file encryption keys, accessing sensitive user information. As stated above, one initial state of the device is to protect class keys using the default key bag. When file-level data protection for the device is enabled, the system 100 uses the public encryption key to protect the protected, escrow and backup key bags.
In one aspect, each class key is randomly generated. In another aspect, class keys in the default key bag are wrapped with the device's unique device identifier (known as a UID or UDID), which is a unique code associated with the hardware of the device. The UID is only accessible when the device is running in a secure environment and cannot be used by any other device. It should be noted that if the device is cracked such that the attacker can control the kernel, he can decrypt items protected with the UID. This is why one aspect of this disclosure is to also protect key bags with a secret known only to the user.
Having discussed different protections for class encryption keys, the disclosure now turns to the issue of backing up data from a device having file-level data protection.
For security reasons, the original protection class keys never leave the device. Instead, this approach rewraps the individual files with a new set of class keys. When the host sends the backup ticket to the device in order to access the original device class keys in the escrow key bag, the system can establish a new set of backup class keys. For enterprise users or other uses, the system can provide an option to disallow the new set of class keys from being backed up. This can allow users to support a zero knowledge backup of the device to a host.
Once the system generates the backup key bag, the first device selects a set of encrypted files to back up, either automatically or based on user input (1250). The system 100 decrypts the file encryption keys corresponding to the selected set of encrypted files. The system decrypts the file encryption keys with the corresponding decrypted protection class keys (1260) from the escrow key bag. The system 100 re-encrypts the file encryption keys corresponding to the selected set of encrypted files with the new protection class keys (1270). In one aspect, the system directly accesses encrypted data from the filesystem instead of decrypting and re-encrypting the file encryption keys.
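A sketch of this rewrapping step, again assuming AES key wrap as the wrapping primitive; class-key selection and metadata handling are simplified.

```python
from cryptography.hazmat.primitives import keywrap

def rewrap_file_key_for_backup(wrapped_file_key: bytes,
                               device_class_key: bytes,   # recovered from the escrow key bag
                               backup_class_key: bytes) -> bytes:
    """Unwrap a file key with the device class key and rewrap it with the backup class key.

    The file data itself is transferred still encrypted under its original file key; only the
    wrapping of that file key changes, so the original device class keys never leave the device.
    """
    file_key = keywrap.aes_key_unwrap(device_class_key, wrapped_file_key)
    return keywrap.aes_key_wrap(backup_class_key, file_key)
```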
Once the system 100 re-encrypts the file encryption keys, they are ready for transfer to the backup device. The first device transfers to the second device the selected set of encrypted files, the backup key bag and metadata associated with the selected set of encrypted files (1280), including the file encryption keys. It is important to note that the system stores the backup files along with the backup key bag, backup ticket and backup secret on the backup device. Since the backup secret decrypts the backup ticket, and the backup ticket decrypts the backup key bag, the class protection keys are accessible. If the class protection keys are accessible, then the backup file keys are accessible, and the backup files can be decrypted. Since the backup class protection keys differ from the class protection keys stored in the default, protected and escrow key bags on a device, an attacker that accesses backup keys can only decrypt backup files on the second device; he cannot access files on the first device. This approach can limit the potential avenues an attacker can take to compromise sensitive user data on a device.
Having disclosed backup initiation and the backup process on a system with file-level data protection, the disclosure now turns to restoring encrypted backup files to a device with file-level data protection. In one aspect, encrypted backup files can be restored to a device not capable or not configured to encrypt on a per-file and per-class basis. In this case, the restored backup files can retain their respective unique file keys and class keys which can be activated when the files are restored to a device capable of such encryption.
In one backup variation, the host connects to the device to establish a backup relationship. The host generates a backup secret. If the user has chosen to protect his backups with a password, the secret can be derived from this password. If not, the secret can be generated at random and stored on the host. The host sends this backup secret to the device. The device creates a host identity if one does not already exist, and provides it with the backup secret as well. The host constructs the backup ticket based on a host identity and/or the backup secret and transmits it to the device. Unlike a sync ticket, the two elements of the host identity are not encrypted with the device UID, but instead are encrypted with the backup secret. As a result, if the user has chosen to protect his backups with a password, any backup content associated with that backup ticket is essentially tied to the user's password. The host can store a key that can access files backed up from the device. This means that an attacker could access data from a device if he has stolen or compromised the host. In some systems, availability of secure storage mitigates this risk, but on other systems, such as Microsoft Windows®, options for such secure storage are limited.
The disclosure now turns to a discussion of restoring a backup. The system 100 can restore a backup to the same device that was the original source for the backup data or to another device. In either case, the restore is based on the backup key bag. One example of this scenario is backing up a mobile phone to a desktop computer and restoring the backed up data to the mobile phone, such as after a system erase and reinstall. When the host wants to restore a backup to the device, it needs to do two things: first, unlock the device class keys, and second, provide the device with the backup class keys so that restored files can be re-wrapped with the device class keys. The host can provide the backup ticket and backup secret to unlock the escrow key bag as before. When the backup agent on the device restores a file from the host, it needs to rewrap the file encryption key with the original device class key. It receives the file's metadata from the host, which includes the wrapped file key. The system unwraps the file key using the appropriate backup class key, and then wraps it with the appropriate device class key.
The backup agent then sets the metadata of the file with the rewrapped file key. If the backup agent is restoring files from multiple backup repositories, such as files that were backed up during an incremental backup, the host is responsible for sending the appropriate backup key bag to the device. In one aspect, the system can only load one backup key bag at a time. This requires a certain level of coordination between the backup component on the host and the agent on the device so that the rewrapping operation does not fail or result in a corrupted file key.
The disclosure now turns to a discussion of restoring a backup to a different device with the backup key bag. One example of this scenario is backing up a mobile device to a desktop computer and restoring the backed up data to a replacement device after the mobile device is lost, stolen, or destroyed. Restoring to a different device follows the exact same mechanism as restoring to the original device, with one important distinction: files that are associated with a protection class based on a device-specific identifier or UID. Because such files are protected with the UID of the original device, they cannot be migrated to the new device. One example is when a device enrolls with a Virtual Private Network (VPN) server: the device is granted credentials that were intended only for that device and that should not be allowed to migrate to another device, even in the event the original device was lost.
Having discussed the process of backing up a device with file-level data protection, the disclosure now turns to the issue of synchronizing devices with file-level data protection.
The first device decrypts protection class keys based on the sync ticket (2330). The system decrypts the sync ticket with the unique device specific code stored on the device and decrypts the protection class keys stored in the escrow key bag with the private key stored on the sync ticket. Once the system decrypts the protection class keys, the system can decrypt the file keys, and decrypt the files using the decrypted file keys. Once the system decrypts the files, the system can synchronize data with the second device (2340). This process allows for new keys created between sync events to be escrowed by storing the public key of the sync ticket on the device. Additionally, the synced device may revoke access by removing escrowed keys from the device.
Having discussed synchronizing data between devices having file-level data protection, the disclosure now turns to the issue of obliteration. Obliteration is used to destroy or remove access to data on a device. In one aspect, obliteration can include actually erasing data stored on a device. In another aspect, obliteration does not actually erase data stored on a device, but removes the means for decrypting encrypted data, thereby effectively erasing data stored on the device by removing access to the data in its usable clear form. In one implementation, a NAND flash layer includes an effaceable storage component which is utilized to guarantee a key is deleted from the system during obliteration or a password change. NAND flash is a type of non-volatile computer storage.
When the system creates a new default key bag, it generates a new set of protection class keys and stores them in the default key bag. After the system obliterates the device, the device does not contain sensitive user information or does not have any way of accessing, understanding, or decrypting sensitive user information. Obliteration can be useful when a device is refurbished for use by a different user.
In the variations discussed above, the device and host, whether backup host or synchronization host, store different key bags. In one suitable configuration, the various key bags are stored as follows: the device stores the backup key bag secret and the escrow key bag secret. The host stores the backup key bag and the escrow key bag. The host can optionally store the backup key bag secret.
Having discussed obliteration, the disclosure now turns to the issue of passcode verification. Typically, a device stores a user passcode or some derivation of a user passcode, for example a hash. A hash is a mathematical function that accepts an input value and produces a fixed-size hash value. In the case when a device stores a passcode, the device compares an entered passcode with the stored passcode on the device. If the passcodes match, the user is granted access. In the case when a device stores a passcode hash, the device compares a hash of the entered passcode with the hash stored on the device. If the hash values match, the user is granted access. A device with file-level data protection, in contrast, does not store the passcode or any derivation of the passcode on the device. For passcode verification, the device checks an entered passcode by attempting to decrypt data encrypted with the passcode.
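A minimal sketch of such a check, in which the stored item is a known marker encrypted at enrollment under a passcode-derived key; the marker value, derivation parameters, and padding scheme are assumptions for illustration.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

EXPECTED = b"passcode-check-v1"                 # known plaintext encrypted when the passcode was set

def verify_passcode(passcode: str, salt: bytes, iv: bytes, stored_ciphertext: bytes) -> bool:
    """Check an entered passcode by trial decryption; no passcode or hash is stored on the device."""
    key = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000, dklen=32)
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    recovered = decryptor.update(stored_ciphertext) + decryptor.finalize()
    # The marker was zero-padded to a whole AES block at enrollment; strip that padding here.
    return recovered.rstrip(b"\x00") == EXPECTED
```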
In one modification, the system performs garbage collection on keys to be deprecated. The system can perform the garbage collection by comparing a list of reference-counted class keys with a list of class keys used in the file system, and removing keys which are not referenced or otherwise used. The system can also gradually or incrementally transform wrapping keys when new keys are generated to protect new content.
The principles described herein can be applied in conjunction with other compatible encryption approaches.
One configuration to which any or all of the principles described above can be applied is content protection. Content protection can include any of a number of approaches to restrict reading and writing to protected content, such as media files, system files, folders, keychains, file systems, partitions, and individual blocks. In some cases, a first portion of a file can be content protected while the remaining portion is unprotected. One specific example usage scenario is a mobile device such as a smartphone. A mobile device can be easily lost or stolen, so a user can mitigate the risk to sensitive data stored on the mobile device by enabling content protection for the sensitive data, such as contacts, documents, calendar items, and so forth. Content can be protected based on a user password, for example.
However, one problem with content protection based on a user password is that user-entered passwords tend to be short. A password is only as secure as the amount of entropy, or uncertainty, the password has. For example, a four digit password has ten thousand possible combinations. A brute force attack which simply tries every possible combination of digits can easily discover such a short password and compromise the protected content, especially given the power, speed, and parallel processing available in modern computing devices. Longer passwords are desirable due to the increased entropy, but users have difficulty remembering extremely long passwords.
One exemplary solution presented herein to this problem includes at least two aspects. The first aspect is to combine a user password with a longer string, such as a secret, non-extractable, device-specific unique identifier. The second aspect is to produce a derived cryptographic key from the combined user password and the longer string through an iterative process that can only be performed on the device, such that any brute force attack must step through each iteration and is therefore slowed down. The system can then encrypt or otherwise protect content with the derived key.
In one example configuration, the device receives a user password. The user password can be a set of alphanumeric or other symbols, gestures, stylus input, biometric input, video input, image input, or any combination thereof. The device then combines the user password with a unique, non-extractable code specific to the device to produce a derived key. For example, the device can make a system call passing the password as an argument, and the system call returns the derived key based on the non-extractable device specific code. However, the device is unable to directly access the device-specific code in software. The device can then use that derived key to encrypt content on the device. This approach can increase the required time to brute force attack encrypted content on the device.
In one aspect, the device-specific key in the hardware is roughly the same strength as the key used to encrypt the data. For example, if the key used to encrypt the data is a 256 bit key, the device-specific key can be 256 bits, 128 bits, or 512 bits. The lengths of the keys can be widely disparate as well, such as a 1024 bit device-specific key and a 64 bit derived key for encrypting data. The device can use all or part of the device-specific key and all or part of the user password to generate the derived key. For example, the device can generate the derived key from the entire user password and the first 100 bits of the device-specific key. The derived key can be larger than, smaller than, or the same size as the combination of the user password and the device-specific key.
One algorithm which can be used to produce the derived key is the password-based key derivation function version 2, or PBKDF2. PBKDF2 takes a user password, a known value and the number of iterations to perform as input and produces a key. PBKDF2 in conjunction with the disclosed algorithm produces the master key used for content encryption.
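Python's standard library exposes PBKDF2 directly, which is enough to show the shape of the derivation; the salt, iteration count, and output length below are placeholders, and the hardware-tied step that mixes in the device secret cannot be reproduced in ordinary software.

```python
import hashlib

passcode = b"1234"              # short user PIN
salt = bytes(16)                # in practice, a known randomly generated per-device salt
iterations = 50_000             # chosen so that each guess costs noticeable time on the device

derived_key = hashlib.pbkdf2_hmac("sha1", passcode, salt, iterations, dklen=32)
print(derived_key.hex())        # candidate master key material derived from the passcode
```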
These approaches can limit the speed at which an attacker can brute force the user password to a function of the hardware to which the password is tied. Because the key is based on the non-extractable device identifier, the maximum speed of any brute force attempt is limited to the speed of the device itself. In many scenarios, such as smartphones or other mobile devices, the relatively slow speed of the device severely limits the brute force speed. Typically, mobile device processors offer limited performance characteristics to fit within a battery envelope, but other devices such as a set-top box can also practice these principles. A powerful server-class computing device can also practice the principles disclosed herein, but certain aspects may be modified to increase the complexity of the operations and maintain sufficiently limited performance for potential brute force attacks. For example, the unique identifier in a server-class computing device can be 4096 bits instead of 256, or the number of iterations can be 30,000,000 instead of 50,000. Another approach for use with more powerful computing devices, such as a desktop computer or a server, is to limit access to the device-specific key to a less powerful processor which is separate from the main processor.
In one implementation, the device takes the password and a known random salt, and runs one round, or iteration, of PBKDF2 using HMAC-SHA1 (Hash-based Message Authentication Code-Secure Hash Algorithm 1) as the PRF (pseudorandom function). The number of rounds corresponds to the number of 16-byte blocks the algorithm produces.
A larger number of iterations will produce a more secure password in the sense that a brute force attack will require more time to complete. For example, an iteration value of 100,000 may correspond to a 100 millisecond delay, meaning that every turn in the brute force attack would require 100 milliseconds. Further, because the device-specific key is required, the speed of the device itself is a limiting factor and the brute force attack cannot easily be parallelized or sped up beyond the computing capacity of the device. This approach increases the difficulty of a brute force attack by several orders of magnitude. One of the only ways to speed up this attack is to physically disassemble and closely examine the chip in the device that stores the device-specific key. While PBKDF2 is discussed herein as one specific example, other suitable algorithms can be applied as well, for example a cryptographic hash function or any other key derivation function. Any algorithm may be used that makes the derived key dependent on a serialized operation using a device-specific or hardware-specific secret key which cannot be directly extracted in software on the device.
The device can then use the derived master key to encrypt data on the device. The device can encrypt data block by block in an independent manner, as in electronic codebook (ECB) mode, or in a chain of dependent blocks, as in cipher-block chaining (CBC) mode. In ECB mode, data is divided into blocks and each block is encrypted independently of the other blocks. In CBC mode, the ciphertext (encrypted block) of a previous round is XORed with the plain text (unencrypted block) of the next round, and the resulting block is then encrypted. Any encryption mode utilizing the derived master key is acceptable.
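A short illustration of the difference between the two modes mentioned above; the key and data are throwaway values.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
iv = os.urandom(16)                 # CBC XORs the first plaintext block with this IV
data = b"A" * 64                    # four identical 16-byte blocks

ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ecb_ct = ecb.update(data) + ecb.finalize()

cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
cbc_ct = cbc.update(data) + cbc.finalize()

# ECB encrypts each block independently, so identical plaintext blocks repeat in the ciphertext;
# CBC XORs each plaintext block with the previous ciphertext block first, so they do not.
assert ecb_ct[:16] == ecb_ct[16:32]
assert cbc_ct[:16] != cbc_ct[16:32]
```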
The disclosure now turns to the exemplary method embodiment shown in
The system 100 combines at least part of the user passcode with at least part of a non-extractable secret associated with the device to yield a derived key (3004). The secret can be unique. In one aspect, each such device has a secret that is unique among all such devices. The system 100 can combine the passcode and secret according to a key derivation algorithm such as PBKDF2. PBKDF2 can be modified to accept, or be used in conjunction with an algorithm that accepts, the device-specific secret in addition to the passcode, the salt, and a number of iterations. PBKDF2 can produce a derived key that is longer or shorter than the sum of the passcode and the non-extractable secret. For example, a passcode of any length and a fixed-size secret can always produce a fixed-length derived key, or the variable-length passcode and the fixed-size secret can produce a variable-length derived key.
The system 100 encrypts content on the device with the derived key (3006). The content can be stored on a volatile or non-volatile storage medium, such as a hard drive, flash memory, system memory, cache, and so forth. Individual blocks can be encrypted independently, as in ECB mode, or as part of a chain of dependent encrypted blocks, as in CBC mode.
Other principles disclosed herein can be applied to the derived or combined key. For example, the derived key can be used as part of an encryption class key scheme, as part of one or more of a default key bag, a protected key bag, an escrow key bag, and a backup key bag, as part of a backup secret or backup ticket, as part of a file key, as part of a sync ticket, and so forth. In one aspect, content protected by derived keys on one device can be migrated to another device having a different unique device-specific identifier, such as when a user upgrades to a newer smartphone model, or when a PDA is lost and synced data from the lost PDA is synced to a new PDA. The data to be migrated can be converted into a clear or unprotected version of the data by use of the original device-specific secret, if it is available, or by use of an escrow key bag or other similar approaches. Once the data is no longer protected, a new or replacement device onto which the data is being migrated can reprotect the data with its own device-specific secret.
One advantage of this approach is that user passwords can be short and still retain highly secure attributes. A brute force attack on the key derived from a user passcode in this way is algorithmically rate limited, and limited even for attackers with access to significant computing power, because the device-specific key cannot be easily extracted from the device, if at all.
One exemplary algorithm, illustrated in
A new “tangle with hardware” operation performs i iterations of the following step:
Kn+1 = TANGLE_WITH_HARDWARE(Kn, i), where i is the desired number of iterations
For every 128 iterations, the system encrypts one 4096-byte page, starting with Kn as the input and producing Kn+1. The system fills a 4096-byte buffer with a pattern of repeating Kn, where each 4-byte word of Kn is XORed with the logical AES block number within the page (in little endian byte order) and placed in the to-be-encrypted buffer. This 4-kilobyte buffer is then AES encrypted in CBC mode using the built-in hardware key. The IV used for the first CBC block is all zeros. After encryption, the system XORs Kn together with each 32-byte region in the 4-kilobyte buffer to obtain a single 32-byte result Kn+1.
The system continues repeating these steps until it reaches Kn where n = i/128. If i is not divisible by 128, the last page only prefills, encrypts, and XORs one 32-byte block per remaining iteration. The output is the AES key used to encrypt content in the system. The output AES key is dependent on both the user's password and the device secret. By choosing an appropriate iteration count, the system rate limits how fast an attacker can attempt to brute force the user's password to find the resulting key.
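The page-based derivation above can be sketched as follows. The hardware-held key obviously cannot be read out in software, so a placeholder stands in for it; the sketch follows one reasonable reading of the description (fill a page keyed to Kn, encrypt it in CBC mode under the hardware key, and fold the encrypted page back into a 32-byte value), and is not a definitive statement of the on-device operation.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

HARDWARE_UID_KEY = os.urandom(32)   # placeholder: the real UID key lives in hardware only

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def tangle_with_hardware(k0: bytes, iterations: int) -> bytes:
    """Sketch of the page-based derivation: one AES-CBC pass per 128 iterations."""
    assert len(k0) == 32
    k = k0
    remaining = iterations
    while remaining > 0:
        blocks_this_page = min(remaining, 128)              # 32-byte blocks handled by this page
        # Fill the page with repeating K, XORing each 4-byte word with its AES block number.
        page = bytearray()
        for aes_block in range(blocks_this_page * 2):        # two 16-byte AES blocks per 32-byte copy
            chunk = k[(aes_block % 2) * 16:(aes_block % 2) * 16 + 16]
            counter = aes_block.to_bytes(4, "little")
            page += b"".join(_xor(chunk[w:w + 4], counter) for w in range(0, 16, 4))
        # Encrypt the page in CBC mode with the hardware key, using an IV of all zeros.
        enc = Cipher(algorithms.AES(HARDWARE_UID_KEY), modes.CBC(b"\x00" * 16)).encryptor()
        encrypted = enc.update(bytes(page)) + enc.finalize()
        # Fold the encrypted page back into a 32-byte value: XOR K with every 32-byte region.
        for offset in range(0, len(encrypted), 32):
            k = _xor(k, encrypted[offset:offset + 32])
        remaining -= blocks_this_page
    return k                                                 # key used to encrypt content
```

Because every page must be encrypted with the device's own hardware key, each passcode guess costs the device a fixed amount of work that cannot be moved to faster, parallel hardware.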
A new “tangle with hardware” operation calculates U1 through Un for a given index n using the formula set forth below:
The resulting key R is obtained by XORing together U1 through Un. These algorithms are exemplary. Other comparable algorithms can be used and may be modified for optimization, speed, security, size, or other considerations.
Embodiments within the scope of the present disclosure may also include tangible and/or non-transitory computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable storage media can be any available media that can be accessed by a general purpose or special purpose computer, including the functional design of any special purpose processor as discussed above. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions, data structures, or processor chip design. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media.
Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
Those of skill in the art will appreciate that other embodiments of the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
The various embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the disclosure. The principles herein primarily discuss mobile devices, but can be equally applied to any computing device. For example, a portable mass storage device can apply any or all of these approaches via its controller board when it interfaces with a laptop or desktop computer. Those skilled in the art will readily recognize various modifications and changes that may be made to the principles described herein without following the example embodiments and applications illustrated and described herein, and without departing from the spirit and scope of the disclosure.
This application is a continuation of U.S. patent application Ser. No. 15/010,858, filed Jan. 29, 2016, entitled “SYSTEM AND METHOD FOR CONTENT PROTECTION BASED ON A COMBINATION OF A USER PIN AND A DEVICE SPECIFIC UNIQUE IDENTIFIER,” now U.S. Pat. No. 9,912,476, issued Mar. 6, 2018, which is a continuation of U.S. patent application Ser. No. 14/299,375, filed Jun. 9, 2014, entitled “SYSTEM AND METHOD FOR CONTENT PROTECTION BASED ON A COMBINATION OF A USER PIN AND A DEVICE SPECIFIC UNIQUE IDENTIFIER,” now U.S. Pat. No. 9,288,047, issued Mar. 15, 2016, which is a continuation of U.S. patent application Ser. No. 12/797,587, filed Jun. 9, 2010, entitled “SYSTEM AND METHOD FOR CONTENT PROTECTION BASED ON A COMBINATION OF A USER PIN AND A DEVICE SPECIFIC UNIQUE IDENTIFIER,” now U.S. Pat. No. 8,788,842, issued Jul. 22, 2014, which is a continuation-in-part (CIP) of U.S. patent application Ser. No. 12/756,153, filed Apr. 7, 2010, entitled “SYSTEM AND METHOD FOR FILE-LEVEL DATA PROTECTION,” now U.S. Pat. No. 8,510,552, issued Aug. 13, 2013, each of which is herein incorporated by reference in its entirety.
Prior Publication Data

Number | Date | Country
---|---|---
20180241556 A1 | Aug 2018 | US
Related U.S. Application Data (Continuations)

Relation | Application No. | Filed | Country
---|---|---|---
Parent | 15/010,858 | Jan 2016 | US
Child | 15/884,200 | | US
Parent | 14/299,375 | Jun 2014 | US
Child | 15/010,858 | | US
Parent | 12/797,587 | Jun 2010 | US
Child | 14/299,375 | | US
Related U.S. Application Data (Continuation in Part)

Relation | Application No. | Filed | Country
---|---|---|---
Parent | 12/756,153 | Apr 2010 | US
Child | 12/797,587 | | US