Risk assessment for managed client devices

Information

  • Patent Grant
    12124586
  • Patent Number
    12,124,586
  • Date Filed
    Thursday, September 23, 2021
  • Date Issued
    Tuesday, October 22, 2024
Abstract
Examples of managed device risk assessment are described. In one example, a copy of an application installed on a client device is decompiled to identify operations performed during execution of the application. A profile is obtained that includes a first rule and a second rule, each specifying whether particular operations are assigned a higher or lower level of risk. A first number of times that the first rule is violated by the operations is determined, and a second number of times that the second rule is violated by the operations is determined. The total number of violations is compared against a threshold, and a remedial action is initiated in response to determining that the total exceeds the threshold.
Description
BACKGROUND

Client devices, such as smartphones, tablet computers, and the like, may execute applications that perform various functions. The applications may be obtained from a repository where the applications are stored and distributed for several client devices. Application developers may periodically update their applications and provide the updated versions of the applications to the repository for distribution to the client devices.





BRIEF DESCRIPTION OF THE DRAWINGS

Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, with emphasis instead being placed upon clearly illustrating the principles of the disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.



FIG. 1 is a drawing of a networked environment according to various embodiments of the present disclosure.



FIGS. 2-4 are drawings of user interfaces that may be encoded and rendered by a computing environment in the networked environment of FIG. 1 according to various embodiments of the present disclosure.



FIGS. 5-6 are flowcharts illustrating examples of functionality implemented as portions of a device management system executed by the computing environment in the networked environment of FIG. 1 according to various embodiments of the present disclosure.



FIG. 7 is a schematic block diagram that illustrates an example of the computing environment in the networked environment of FIG. 1 according to various embodiments of the present disclosure.





DETAILED DESCRIPTION

The present disclosure is directed towards systems and methods for assessing the risks of applications that may be installed on one or more devices. In one embodiment, a device management system identifies an application that has already been installed on a client device that is managed by the device management system. The device management system then obtains a compiled version of the application from a third party public application repository and decompiles the application to generate assembly code or intermediary code. The assembly or intermediary code is analyzed to identify operations that may be performed by the application.


The device management system may then identify a usage category for the application and obtain a policy specification that has been assigned to the usage category. For example, a usage category for a particular application may be “email client,” indicating that the application is intended to send and receive email, and a policy specification for the “email client” usage category may specify that “email client” applications should not access the global positioning system (GPS) of the client device. For such an example, the device management system may count how many times the generated assembly or intermediary code for the application represents an operation that accesses the GPS. If the number of operations that access the GPS exceeds a predefined threshold, the device management system may initiate a remedial action. For example, the device management system may cause the application to be uninstalled from the client device, or the device management system may alert a user that the application is a potential security risk. Additionally, the device management system may encode and render one or more reports that present various information regarding the analysis of the application. The one or more reports may be provided to an administrator of the device management system so that the administrator may decide whether to, for example, prohibit the application from being installed in the client devices that are managed by the device management system.
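
As an illustration of the check described above, the following Python sketch (an editorial addition; the API identifiers, function names, and threshold are assumptions rather than part of the disclosure) counts GPS-related operations in the decompiled code and flags the application when the count exceeds a threshold:

    # Minimal sketch, assuming operations are strings extracted from the
    # decompiled assembly/intermediary code. The GPS API identifiers below
    # are illustrative Android framework calls.
    GPS_APIS = {
        "Landroid/location/LocationManager;->getLastKnownLocation",
        "Landroid/location/LocationManager;->requestLocationUpdates",
    }

    def count_gps_operations(operations):
        """Count operations that reference a GPS-related API."""
        return sum(1 for op in operations if any(api in op for api in GPS_APIS))

    def check_email_client(operations, threshold=0):
        """Apply the example 'email client' policy: GPS access is not expected."""
        if count_gps_operations(operations) > threshold:
            return "initiate remedial action"   # e.g., uninstall or alert the user
        return "compliant"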


In the following discussion, a general description of a non-limiting representative system and its components is provided, followed by a discussion of the operation of the system.


With reference to FIG. 1, shown is a networked environment 100 according to various embodiments. The networked environment 100 includes a computing environment 103, a client device 106, a public application distribution environment 109, and potentially other components, which are in data communication with each other over a network 113. The network 113 includes, for example, the Internet, one or more intranets, extranets, wide area networks (WANs), local area networks (LANs), wired networks, wireless networks, other suitable networks, or any combination of two or more such networks. Such networks may comprise satellite networks, cable networks, Ethernet networks, telephony networks, and/or other types of suitable networks.


The computing environment 103 may comprise, for example, a server computer or any other system providing computing capability. Alternatively, the computing environment 103 may employ multiple computing devices that may be arranged, for example, in one or more server banks, computer banks, or other arrangements. Such computing devices may be located in a single installation or may be distributed among many different geographical locations. For example, the computing environment 103 may include multiple computing devices that together form a hosted computing resource, a grid computing resource, and/or any other distributed computing arrangement. In some cases, the computing environment 103 may operate as at least a portion of an elastic computing resource where the allotted capacity of processing, network, storage, or other computing-related resources may vary over time. The computing environment 103 may also include or be operated as one or more virtualized server instances that are created in order to execute the functionality that is described herein.


Various systems and/or other functionality may be executed in the computing environment 103 according to various embodiments. Also, various data is stored in a data store 116 that is accessible to the computing environment 103. The data store 116 may be representative of multiple data stores 116. The data stored in the data store 116 is associated with the operation of the various systems and/or functional entities described below.


A device management system 119 and/or other systems may be executed in the computing environment 103. The device management system 119 may be executed to manage and/or oversee the operation of multiple client devices 106. For example, an enterprise, such as a company, may operate the device management system 119 to ensure that the client devices 106 of its employees, contractors, customers, etc. are operating in compliance with specified compliance rules. By ensuring that the client devices 106 are operated in compliance with the compliance rules, the enterprise may control and protect access to its computing resources and increase the security of the computing environment 103.


The device management system 119 may provide a management console 123, an application scanning engine 126, and/or other components. The management console 123 may facilitate an administrator controlling and interacting with the device management system 119. For example, the management console 123 may generate one or more user interfaces that are rendered on a display device. Such user interfaces may facilitate entering commands or other information for the device management system 119. Additionally, the user interfaces may render presentations of statistics or other information regarding the client devices 106 that are managed by the device management system 119.


The application scanning engine 126 may be executed to analyze applications 129 that may be installed in one or more of the client devices 106. To this end, the application scanning engine 126 may include a decompiler 133, a code analyzer 136, and/or other components. The decompiler 133 may obtain a compiled application 129 and decompile the compiled application 129 to generate assembly and/or intermediary code. Such assembly and/or intermediary code may include human-readable text that represents operations that may be performed when the application 129 is executed in a client device 106. The code analyzer 136 may be executed to analyze the assembly and/or intermediary code in order to identify the particular operations that may be performed when an application 129 is executed in the client device 106.


Although the application scanning engine 126 is shown in FIG. 1 as being executed in the computing environment 103, in alternative embodiments, the application scanning engine 126 may be executed in the client device 106. In such embodiments, the results of the application scanning engine 126 may be transmitted from the client device 106 to the device management system 119 and used by the device management system 119 as described herein.


In other embodiments, the application scanning engine 126 may be operated as a service by a third party provider. In these embodiments, the device management system 119 and the application scanning engine 126 may communicate by using an application programming interface (API) or other communication protocol over the network 113. In these embodiments, the application scanning engine 126 may analyze the assembly, intermediary, and/or object code for an application 129, and the results of the analysis may be transmitted over the network 113 to the device management system 119.
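
For such third-party deployments, the exchange might resemble the following Python sketch (an editorial illustration; the endpoint, payload, and response format are hypothetical and not defined by the disclosure):

    import requests  # assumes the third-party 'requests' package is installed

    SCAN_API_URL = "https://scanner.example.com/api/v1/scan"  # hypothetical endpoint

    def request_remote_scan(package_name, version, api_key):
        """Ask a remote scanning service to analyze an application and return
        the operations it identified. Payload and response shape are assumed."""
        response = requests.post(
            SCAN_API_URL,
            json={"package": package_name, "version": version},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["operations"]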


The data stored in the data store 116 may include client device data 135, private application data 137, one or more profiles 139, and/or other information. The client device data 135 may include information regarding the client devices 106 that are managed and/or controlled by the device management system 119. Such client device data 135 for a particular client device 106 may include, for example, the identification of the particular applications 129 that are installed in the client device 106, historical data regarding the operation of the client device 106, and/or other information.


The private application data 137 may represent applications 129 that may be distributed through the device management system 119 through, for example, a private application repository that is executed by the computing environment 103. The private application repository may store and distribute applications 129 to only the client devices 106 that are managed by the device management system 119. In some embodiments, an application 129 that is represented in the private application data 137 may be an application 129 that has been previously processed by the application scanning engine 126 and determined as being a low security risk to the client devices 106 and/or the computing environment 103. In other embodiments, the private application data 137 may represent an application 129 that was developed by or for the entity that operates or uses the computing environment 103. Such an application 129 may be referred to as an “in-house” application 129.


A profile 139 may comprise a set of one or more rules 143. Each rule 143 may specify whether an operation is permitted to be performed by an application 129 in a client device 106. Non-limiting examples of rules 143 include whether an application 129 is permitted to read and/or write data, such as calendar data, location data (e.g., GPS data), user contact data (e.g., names, phone numbers, etc.), messages (e.g., short message service (SMS) messages, email messages, etc.), files, history data (e.g., browser history, email history, rendered multimedia history, etc.), and/or any other information. As additional non-limiting examples, one or more rules 143 may specify whether an application 129 is permitted to enable, disable, and/or check the status of a component for the client device 106, such as a camera, a network interface (e.g., a wired or wireless Ethernet interface, a cellular network interface, a BLUETOOTH interface, etc.), and/or any other component associated with the client device 106. Furthermore, some rules 143 may specify whether an application 129 is permitted to communicate with one or more particular devices, Internet Protocol (IP) addresses, network sites (e.g., web sites), phone numbers, email addresses, etc.


Each profile 139 may be assigned to one or more usage categories 146, which are described in further detail below. For example, a profile 139 assigned to the “navigation” usage category 146 may have a first rule 143 specifying that an application 129 is not permitted to access phone call logs and a second rule 143 specifying that the application 129 is permitted to access, enable, disable, and check the status of a GPS. By contrast, a profile 139 assigned to the “social networking” usage category 146 may have a first rule 143 specifying that an application 129 is permitted to access phone logs and a second rule 143 specifying that the application 129 is not permitted to access, enable, disable, and check the status of a GPS. Additionally, there may be a profile 139 that is assigned to all usage categories 146. Such a profile 139 may be referred to as a “global” profile 139.


Each rule 143 may be assigned a level of risk. A level of risk may indicate the degree to which the device management system 119, a client device 106, and/or any other device may be exposed to a security breach if the rule 143 were to be violated. For example, a rule 143 that prohibits an application 129 from communicating with a known malicious device may be assigned a relatively high level of risk. By contrast, a rule 143 that prohibits an application 129 from checking the status of a GPS may be assigned a relatively low level of risk. Thus, a profile 139 may have multiple sets of rules 143 that are assigned respective levels of risk.
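
One possible in-memory representation of profiles 139, rules 143, and their risk levels is sketched below in Python (an editorial illustration; the class and field names are assumptions, and the disclosure does not prescribe any particular data model):

    from dataclasses import dataclass, field
    from enum import Enum

    class Risk(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass(frozen=True)
    class Rule:
        operation: str       # the operation the rule governs, e.g. "read_call_log"
        permitted: bool      # whether an application may perform the operation
        risk: Risk           # level of risk assigned to violating the rule

    @dataclass
    class Profile:
        name: str
        usage_categories: set            # usage categories the profile is assigned to
        rules: list = field(default_factory=list)

    # Example corresponding to the "navigation" profile described above.
    navigation_profile = Profile(
        name="navigation",
        usage_categories={"navigation"},
        rules=[
            Rule("read_call_log", permitted=False, risk=Risk.HIGH),
            Rule("access_gps", permitted=True, risk=Risk.LOW),
        ],
    )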


The client device 106 is representative of multiple client devices 106 that may be coupled to the network 113. The client device 106 may comprise, for example, a processor-based system such as a computer system. Such a computer system may be embodied in the form of a desktop computer, a laptop computer, a personal digital assistant, a mobile phone (e.g., a “smartphone”), a set-top box, a music player, a web pad, a tablet computer system, a game console, an electronic book reader, or any other device with like capability. The client device 106 may include a display that comprises, for example, one or more devices such as liquid crystal display (LCD) displays, gas plasma-based flat panel displays, organic light emitting diode (OLED) displays, LCD projectors, or other types of display devices.


The client device 106 may be configured to execute one or more applications 129, a management component 149, and/or other components. An application 129 may comprise, for example, one or more programs that perform various operations when executed in the client device 106. Such an operation may comprise, for example, storing data, reading data, controlling a component for the client device 106, seeking to obtain authorization to access a resource and/or perform functionality, causing other applications 129 and/or components to perform functionality, and/or other functionality. An application 129 may perform some operations by initiating functions that are handled by an operating system in the client device 106. An application 129 may initiate operating system functions by, for example, performing API calls for the operating system.


One or more usage categories 146 may be associated with each application 129. A usage category 146 may, for example, indicate the intended use for an application 129. For example, a particular application 129 may be associated with the usage category “music,” which indicates that the application 129 may be used to process audio. As another non-limiting example, a usage category 146 for an application 129 may be “photography,” indicating that the application 129 may be used to generate and/or render photographs and/or videos.


The management component 149 may be executed on the client device 106 to oversee, monitor, and/or manage at least a portion of the resources for the client device 106. The management component 149 may include a mobile device management service that operates in conjunction with an operating system for the client device 106. Additionally, the management component 149 may include an agent that operates in the application layer of the client device 106 and that monitors at least some of the activity being performed in the client device 106. Furthermore, the management component 149 may include an application wrapper that interfaces with a software component to facilitate overseeing, monitoring, and/or managing one or more resources of the client device 106. Additionally, the management component 149 may be a portion of an application 129 that was developed, for example, using a Software Development Kit (SDK) that facilitates implementing functionality that oversees, monitors, and/or manages at least a portion of the resources for the client device 106. The management component 149 may be executed by the client device 106 automatically upon startup of the client device 106. Additionally, the management component 149 may run as a background process in the client device 106. As such, the management component 149 may execute and/or run without user intervention. Additionally, the management component 149 may communicate with the device management system 119 in order to facilitate the device management system 119 managing the client device 106.


The public application distribution environment 109 may comprise, for example, a server computer or any other system providing computing capability. Alternatively, the public application distribution environment 109 may employ multiple computing devices that may be arranged, for example, in one or more server banks, computer banks, or other arrangements. Such computing devices may be located in a single installation or may be distributed among many different geographical locations. The public application distribution environment 109 may be operated by a third party relative to the one or more entities that operate the computing environment 103.


The public application distribution environment 109 may provide a public application repository 153 that stores public application data 156. The public application data 156 may comprise data representing several applications 129 that are made available for distribution to client devices 106 that are managed by the device management system 119 as well as other client devices 106 that are not managed by the device management system 119. The public application data 156 may also include information that is associated with these applications 129, such as data that represents the usage categories 146 for the applications 129. The public application repository 153 may also distribute updates for the applications 129 represented in the public application data 156 as well as one or more operating systems for the client devices 106.


Next, a general description of the operation of the various components of the networked environment 100 is provided. To begin, the client device 106 is powered up, and the management component 149 begins executing in the client device 106. As previously mentioned, the management component 149 may execute automatically and be run as a background process whenever the client device 106 is powered on.


As part of the initiation process for the management component 149, the management component 149 may identify the applications 129 that are installed in the client device 106. In some embodiments, the management component 149 may identify the installed applications 129 from time to time, such as upon the expiration of a timer, in response to receiving a request from the device management system 119, and/or in response to any other triggering event. After the installed applications 129 have been identified, the management component 149 may cause the client device 106 to transmit a list of the applications 129 installed in the client device 106 to the device management system 119.


After receiving the list of applications 129 that are installed in the client device 106, the device management system 119 may parse the list and determine whether any of the applications 129 in the list have not yet been processed by the application scanning engine 126. To this end, the device management system 119 may compare the name of each application 129 in the list of applications 129 installed in the client device 106 to a list of names of applications 129 that have been previously processed by the application scanning engine 126. For each application 129 that has not yet been processed by the application scanning engine 126, the device management system 119 may process the application 129, as will now be described.
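
Before turning to that processing, note that the filtering step just described amounts to a simple set difference, as in the following Python sketch (an editorial illustration; application identifiers are assumed to be package names):

    def select_unprocessed(installed_apps, processed_apps):
        """Return the installed applications that the scanning engine has not
        yet processed."""
        processed = set(processed_apps)
        return [app for app in installed_apps if app not in processed]

    # Example: only "com.example.maps" would be queued for scanning.
    queue = select_unprocessed(
        installed_apps=["com.example.mail", "com.example.maps"],
        processed_apps=["com.example.mail"],
    )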


First, the device management system 119 may obtain a copy of the compiled application 129 from the public application repository 153 or from any other source. The compiled application 129 may comprise object code and/or other information. After obtaining the compiled application 129, the device management system 119 may provide the application scanning engine 126 with the object code for the application 129. The decompiler 133 of the application scanning engine 126 may then decompile the object code to generate assembly and/or intermediary code for the application 129. After the assembly and/or intermediary code is generated by the decompiler 133, the assembly and/or intermediary code may be provided to the code analyzer 136, which may parse the assembly and/or intermediary code to identify the operations that are represented in the code. Further description regarding decompiling compiled applications 129 and analyzing assembly code to identify operations is provided in application Ser. No. 14/498,486, titled "Fast and Accurate Identification of Message-Based API Calls in Application Binaries," filed on Sep. 26, 2014, and issued as U.S. Pat. No. 9,280,665, which is incorporated by reference herein in its entirety.
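
A simplified version of this decompile-and-identify step is sketched below in Python (an editorial illustration; it assumes the apktool decompiler is available and treats each method invocation in the generated smali code as an operation, which is only one possible convention):

    import re
    import subprocess
    from pathlib import Path

    # Matches smali method invocations such as:
    # invoke-virtual {v0, v1}, Landroid/location/LocationManager;->getLastKnownLocation(...)
    INVOKE_PATTERN = re.compile(r"invoke-\w+(?:/range)? \{[^}]*\}, (\S+)")

    def decompile(apk_path, out_dir):
        """Decompile a compiled application into smali (intermediary) code."""
        subprocess.run(
            ["apktool", "d", str(apk_path), "-o", str(out_dir), "-f"],
            check=True,
        )

    def extract_operations(out_dir):
        """Collect every method invocation found in the decompiled output."""
        operations = []
        for smali_file in Path(out_dir).rglob("*.smali"):
            for line in smali_file.read_text(errors="ignore").splitlines():
                match = INVOKE_PATTERN.search(line)
                if match:
                    operations.append(match.group(1))
        return operations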


In alternative embodiments, the device management system 119 may obtain the source code for an application 129 from a developer of the application 129 or from another source. In these embodiments, the process of decompiling the compiled application 129 to generate the assembly and/or intermediary code may be omitted, and the obtained source code may be provided to the code analyzer 136 for processing.


In some embodiments, an application 129 may be executed in a client device 106, and the resulting functionality performed in the client device 106 may be observed, recorded, and identified. Such operations may include reading and/or writing data, accessing a resource (e.g., data, a hardware component, or a software component), requesting authorization to access a resource, and/or other operations. The operations that are performed may be a result of the application 129 of interest and/or other applications 129 being executed. Thus, this embodiment of observing the operations being performed in the client device 106 may be used to identify operations that have been executed by one or more other applications 129 in response to being called by the application 129 of interest. In alternative embodiments, an application 129 may be simulated, and the simulated operations may be observed, recorded, and identified.


The device management system 119 may also identify one or more usage categories 146 that are associated with the application 129. In some embodiments, the public application repository 153 may store data representing the usage categories 146 for the applications 129 that it distributes. In these embodiments, the device management system 119 may identify the one or more usage categories 146 by retrieving this information from the public application repository 153.


In other embodiments, the device management system 119 may identify the one or more usage categories 146 in various ways. For instance, a usage category 146 may be identified by facilitating an administrator of the device management system 119 and/or a user of the client device 106 inputting data that specifies the usage category 146. Alternatively, the device management system 119 may detect that the name and/or metadata for an application 129 is indicative of a usage category 146. To this end, the data store 116 may include data that represents lists of words that have been associated with respective usage categories 146. If the name and/or metadata for an application 129 includes one or more of the words that have been associated with a particular usage category 146, the device management system 119 may determine that the application 129 is associated with that usage category 146. As a non-limiting example of such an embodiment, if the name of an application 129 includes the text “map,” and if the text “map” has been associated with the “navigation” usage category 146, the device management system 119 may determine that the application 129 is associated with the “navigation” usage category 146.
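
A minimal keyword-matching approach of this kind could look like the following Python sketch (an editorial illustration; the keyword lists are assumptions and would, per the description above, come from the data store 116):

    CATEGORY_KEYWORDS = {
        "navigation": {"map", "route", "gps"},
        "music": {"music", "audio", "player"},
    }

    def infer_usage_categories(app_name, metadata_text=""):
        """Infer usage categories from the words in an application's name
        and metadata."""
        words = set((app_name + " " + metadata_text).lower().split())
        return {
            category
            for category, keywords in CATEGORY_KEYWORDS.items()
            if words & keywords
        }

    infer_usage_categories("City Map Pro")   # -> {"navigation"}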


In another embodiment, the device management system 119 may identify a usage category 146 for an application 129 based on the types of operations that may be performed by the application 129. In this embodiment, the device management system 119 may assign an application 129 a particular usage category 146 if the application 129 may perform one or more operations that are associated with the usage category 146. For example, if an application 129 may perform several operations that involve a GPS for a client device 106, the device management system 119 may determine that the usage category 146 for the application 129 is the “navigation” usage category 146.


In some embodiments, the device management system 119 may facilitate an administrator defining new usage categories 146. To this end, the management console 123 may generate one or more user interfaces that facilitate the administrator inputting data that defines a usage category 146.


After the one or more usage categories 146 for an application 129 have been identified and the operations represented in the assembly and/or intermediary code for the application 129 have been identified, the device management system 119 may begin the process of determining whether the application 129 complies with the one or more profiles 139 that have been assigned to the one or more usage categories 146. As previously mentioned, profiles 139 may be assigned to respective usage categories 146, and each profile 139 may include one or more rules 143 that specify whether an application 129 is permitted to perform particular operations. In some embodiments, the device management system 119 may provide predefined profiles 139 that are assigned to respective usage categories 146. The device management system 119 may facilitate an administrator modifying one or more of the predefined profiles 139 in some embodiments. Additionally or alternatively, the device management system 119 may facilitate an administrator creating and modifying new profiles 139.


For embodiments in which multiple usage categories 146 are associated with an application 129, the device management system 119 may combine the profiles 139 assigned to the multiple usage categories 146 for the purpose of processing the application 129. For example, if the rules 143 for a first profile 139 specify that an application 129 (i) is not permitted to access user contact data and (ii) is not permitted to access a GPS, and a second profile 139 has a single rule 143 that specifies that an application 129 is not permitted to access a GPS, the device management system 119 may apply a logical conjunction (e.g., the logical "AND" operator) to the rules 143 for both profiles 139 to generate a combined profile 139. In other words, only the rules 143 that are included in both profiles 139 would be included in the combined profile 139 for this example. The combined profile 139 in this example would have a single rule 143 that specifies that an application 129 is not permitted to access a GPS. It is understood that other logical operators may be used to combine profiles 139 in alternative embodiments.
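
Expressed in code, the logical-AND combination described above reduces to a set intersection over the rules, as in the following Python sketch (an editorial illustration; rules are simplified to (operation, permitted) pairs):

    def combine_profiles(first_rules, second_rules):
        """Combine two profiles with a logical AND: keep only the rules that
        appear in both, mirroring the GPS example above."""
        return set(first_rules) & set(second_rules)

    first = {("read_contacts", False), ("access_gps", False)}
    second = {("access_gps", False)}
    combine_profiles(first, second)   # -> {("access_gps", False)}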


After obtaining the profile 139 for the application 129, the application scanning engine 126 may count how many times each rule 143 is violated by the operations represented in the assembly and/or intermediary code for the application 129. The application scanning engine 126 may determine that a violation exists if, for example, a rule 143 prohibits a particular operation and if the code indicates that the particular operation is performed by the application 129. As another example, a violation may be detected if a rule 143 requires that a particular operation be performed and if the particular operation is not represented in the code for the application 129. Thus, if a profile 139 has a first rule 143, a second rule 143, and a third rule 143, the application scanning engine 126 may identify how many times the first rule 143 is violated, how many times the second rule 143 is violated, and how many times the third rule 143 is violated by the operations represented in the assembly and/or intermediary code.
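
The per-rule counting can be sketched as follows in Python (an editorial illustration; it reuses the simplified (operation, permitted) rule shape and covers the prohibited-operation case, with the required-operation case noted in the docstring):

    from collections import Counter

    def count_violations(rules, operations):
        """Count, per prohibited operation, how many identified operations
        violate the corresponding rule. A rule that instead requires an
        operation could be checked by verifying its count is non-zero."""
        occurrences = Counter(operations)
        return {
            operation: occurrences[operation]   # each occurrence is one violation
            for operation, permitted in rules
            if not permitted
        }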


In some embodiments, the application scanning engine 126 may also count how many times one or more rules 143 for the respective levels of risk have been violated. For example, if each rule 143 for a profile 139 has been assigned either a “high,” “medium,” or “low” level of risk, the application scanning engine 126 may count how many times a “high” level of risk rule 143 has been violated, how many times a “medium” level of risk rule 143 has been violated, and how many times a “low” level of risk rule 143 has been violated.


The device management system 119 may generate one or more reports and/or perform other actions. Information from a report may be encoded and rendered for display so that an administrator or another user may be presented with the information in the report. In one embodiment, a report includes information representing the number of times that each rule 143 in the corresponding profile 139 has been violated. Additionally or alternatively, a report may represent the total number of violations of the rules 143 for the profile 139. Furthermore, some reports may include the number of violations that have been identified for each set of rules 143 that has been assigned a particular level of risk.


The device management system 119 may initiate one or more actions in response to one or more rules 143 for a profile 139 being violated. In some embodiments, an action may be initiated upon the total number of violations satisfying a predetermined threshold. As a non-limiting example, an action may be initiated if more than N total violations are identified, where N is a predefined number. In alternative embodiments, the remedial action may be initiated upon the number of violations for a set of rules 143 that are assigned a particular level of risk satisfying a predetermined threshold. As a non-limiting example, the device management system 119 may initiate an action if more than M relatively high risk violations have been identified, where M is a predefined number.
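
The two threshold checks described here (a total-violation threshold N and a high-risk threshold M) might be combined as in the following Python sketch (an editorial illustration; the risk labels and parameter names are assumptions):

    def needs_remediation(per_rule_counts, rule_risks, total_threshold, high_risk_threshold):
        """Return True if either the total number of violations exceeds N
        (total_threshold) or the number of high-risk violations exceeds M
        (high_risk_threshold)."""
        total = sum(per_rule_counts.values())
        high_risk = sum(
            count
            for operation, count in per_rule_counts.items()
            if rule_risks.get(operation) == "high"
        )
        return total > total_threshold or high_risk > high_risk_threshold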


By the device management system 119 initiating various actions in response to one or more rules 143 for a profile 139 being violated, the device management system 119 may protect the client device 106, the computing environment 103, and/or other devices from being affected by an application 129 that is malicious and/or poorly designed. For example, in one embodiment, the computing environment 103 may stop communicating with the client device 106 and/or reject communication requests from the client device 106. In another embodiment, the management console 123 may generate and encode a message to be presented to an administrator for the device management system 119. Such a message may inform the administrator that the client device 106 has an application 129 that does not comply with a profile 139.


In some embodiments, the device management system 119 may transmit a command to the management component 149 in the client device 106 in response to one or more rules 143 for a profile 139 being violated. Such a command may cause the management component 149 to initiate the uninstallation of the application 129. Additionally, the command may cause the management component 149 to initiate the installation of another application 129 in the client device 106, such as an application 129 that is similar to the uninstalled application 129 but that has been previously determined to comply with a profile 139. Thus, the non-compliant application 129 may be automatically replaced with a similar application 129 that is compliant with a profile 139.


In another embodiment, the command may instruct the management component 149 to cause data in the client device 106 to become inaccessible to applications 129 in the client device 106. For example, the management component 149 may cause data to be deleted or may cause data to become encrypted.


In another embodiment, the command may instruct the management component 149 to cause a message to be presented to a user of the client device 106. Such a message may, for example, inform the user that the application 129 has been identified as violating a profile 139. Additionally, the message may suggest that the application 129 be uninstalled and/or recommend another application 129 to be installed in its place.


Additionally, in some embodiments, the device management system 119 may transmit one or more commands to multiple client devices 106 that are managed by the device management system 119 and that have the application 129 that has been deemed noncompliant with the profile 139. In this way, once an application 129 has been deemed noncompliant, the device management system 119 may initiate remedial action for all of the client devices 106 that have that application 129.


In some instances, multiple client devices 106 may be associated with a particular user. For example, a user may operate a mobile phone and a tablet computer that are both managed by the device management system 119. In some embodiments, if one client device 106 associated with a user has an application 129 that is deemed non-compliant with a profile 139, the device management system 119 may initiate a remedial action for all of the client devices 106 that are associated with the user. For instance, if an application 129 in one client device 106 of the user is non-compliant with a profile 139, the device management system 119 may transmit one or more commands to all client devices 106 associated with the user to cause at least some data to become inaccessible to all of the client devices 106.


If the number of violations for one or more rules 143 or one or more sets of rules 143 is less than a predefined threshold, the device management system 119 may determine that the application 129 is compliant with the profile 139. In some embodiments, the device management system 119 may assign a certification designation to the application 129 to indicate to users of the client devices 106 and/or administrators of the device management system 119 that the application 129 has been deemed compliant with a profile 139 and therefore is believed to present a relatively low security risk.


After an application 129 has been processed by the application scanning engine 126, the results and other associated data may be stored in the data store 116 for various uses. For example, if the application scanning engine 126 determines that an application 129 complies with a profile 139, data representing the application 129, such as the identity of the application 129, the assembly and/or intermediary code, and/or other data, may be stored in conjunction with data for other applications 129 that have been deemed compliant with one or more profiles 139. Similarly, if the application scanning engine 126 determines that an application 129 violates a profile 139, data representing the application 129, such as the identity of the application 129, the assembly and/or intermediary code, and/or other data, may be stored in conjunction with other applications 129 that have been deemed noncompliant with one or more profiles 139.


The device management system 119 may also facilitate the distribution of applications 129 that have been previously deemed compliant with a particular profile 139. To this end, the device management system 119 may obtain a list of the applications 129 that have been deemed compliant with a particular profile 139, and the management console 123 may present this list of applications 129 to an administrator of the device management system 119. These applications 129 may also be represented in the private application data 137 in the data store 116. The management console 123 may facilitate the administrator selecting one or more of the applications 129 for being made available through a private application repository that is provided by the device management system 119. Upon an application 129 being selected by the administrator, the device management system 119 may include the application 129 in the private application repository, and client devices 106 that are managed by the device management system 119 may obtain and install the application 129 through the private application repository. In some embodiments, once an application 129 has been determined to be compliant with one or more profiles 139, the device management system 119 may make the application 129 available through the private application repository. A user may access the private application repository and initiate the installation of the application 129 through the private application repository.


Embodiments of the present disclosure may also use information associated with applications 129 that have been previously processed by the application scanning engine 126 to facilitate identifying other applications 129 that may be malicious and/or poorly designed and therefore be potential security risks for the computing environment 103, the client devices 106, and/or other devices. To this end, the application scanning engine 126 may execute a machine learning system. Such a machine learning system may comprise, for example, one or more artificial neural networks that may be configured to detect patterns.


Assembly and/or intermediary code for applications 129 that have been deemed to violate a profile 139 may be input into the machine learning system in order to train the machine learning system to identify characteristics that are indicative of applications 129 that violate the profile 139. After the machine learning system has been trained, another application 129 may be input into the machine learning system, and the machine learning system may determine whether the identified characteristics are present in this application 129. In this way, machine learning techniques may be employed to identify applications 129 that pose a risk to the security of the computing environment 103, the client devices 106, and/or other devices. As a non-limiting example, the machine learning system may learn that applications 129 that have been developed by a particular developer are likely to be non-compliant with a profile 139.
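
The disclosure describes the machine learning system in terms of artificial neural networks; as a stand-in, the following Python sketch (an editorial illustration using scikit-learn, which is not named in the disclosure) trains a simple bag-of-API-calls classifier on decompiled code labeled as compliant or non-compliant:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    def train_risk_model(code_samples, labels):
        """code_samples: one string of assembly/intermediary code per app;
        labels: 1 if the app was deemed to violate the profile, else 0."""
        vectorizer = CountVectorizer(token_pattern=r"[\w/;$>.-]+")
        features = vectorizer.fit_transform(code_samples)
        model = LogisticRegression(max_iter=1000).fit(features, labels)
        return vectorizer, model

    def predict_violation(vectorizer, model, code_sample):
        """Return True if the trained model predicts a profile violation."""
        return bool(model.predict(vectorizer.transform([code_sample]))[0])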


Additionally, the machine learning system may be retrained from time to time. Furthermore, applications 129 that have been deemed to comply with a profile 139 may be used to train and/or refine the machine learning system and to identify characteristics that are indicative of applications 129 that comply with the profile 139. Once the machine learning system has been trained, the machine learning system may be used to identify applications 129 that do not pose a risk to the security of the computing environment 103, the client devices 106, and/or other devices.


Referring next to FIG. 2, shown is an illustration of an example of a user interface 200a that may be encoded and rendered by the management console 123 (FIG. 1) in the device management system 119 (FIG. 1) according to various embodiments of the present disclosure. The user interface 200a shown in FIG. 2 may be generated after an application 129 (FIG. 1) has been processed by the application scanning engine 126 (FIG. 1). In particular, the user interface 200a may present to an administrator of the device management system 119 at least a portion of the information of a report that is generated by the application scanning engine 126.


As shown in FIG. 2, the user interface 200a includes information that identifies the name 203 and the version 206 of the application 129 that was processed by the application scanning engine 126. Additionally, the user interface 200a identifies the usage category 146 (FIG. 1) that has been associated with the application 129. For the embodiment shown in FIG. 2, the user interface 200a also includes a change category button 209 and a change policy button 213. Upon the administrator selecting the change category button 209, another user interface may be generated that facilitates the administrator selecting another predefined usage category 146. In response to another usage category 146 being selected, the application 129 may be processed using the corresponding profile 139. After the application 129 has been processed using that profile 139 (FIG. 1), information for at least a portion of the report may be presented in the user interface 200a.


The change policy button 213 may facilitate the administrator modifying the existing profile 139 and/or creating a new profile 139 for the usage category 146. To this end, in response to the change policy button 213 being selected, one or more user interfaces may be generated that, for example, facilitate the administrator selecting and/or defining one or more rules 143 (FIG. 1) for the profile 139.


The user interface 200a also includes a first region 216 and a second region 219 that present information for at least a portion of the report generated by the application scanning engine 126. For the embodiment shown, each rule 143 has been associated with one of three levels of risk. The user interface 200a represents the levels of risk as being "high level," "medium level," or "low level." The first region 216 presents how many violations have been identified for each level of risk. In the example shown, the device management system 119 has identified four violations of one or more rules 143 that have been assigned a "high" level of risk, three violations of one or more rules 143 that have been assigned a "medium" level of risk, and six violations of one or more rules 143 that have been assigned a "low" level of risk. In addition, the total number of violations that have been identified is presented in the first region 216. The second region 219 shown in FIG. 2 presents descriptions of the violations of the rules 143 that are associated with the "high" level of risk.


The user interface 200a also includes a view detailed report button 223 and a notify developer button 226. Upon the administrator selecting the view detailed report button 223, another user interface that includes additional information from the report may be generated. In response to the administrator selecting the notify developer button 226, the device management system 119 may transmit at least a portion of the report generated by the application scanning engine 126 to the developer of the application 129. To this end, the device management system 119 may obtain from the public application repository 153 (FIG. 1) contact information (e.g., an email address) for the developer of the application 129, and information from the report may be transmitted to the developer using the contact information. The developer may use this information to remedy the violations that have been identified.


Referring next to FIG. 3, shown is an illustration of an example of a user interface 200b that may be encoded and rendered by the management console 123 (FIG. 1) in the device management system 119 (FIG. 1) according to various embodiments of the present disclosure. The user interface 200b shown in FIG. 3 may be generated in response to the view detailed report button 223 (FIG. 2) being selected by an administrator for the device management system 119.


As shown in FIG. 3, the user interface 200b includes information that identifies the name 203 and the version 206 of the application 129 (FIG. 1) that was processed by the application scanning engine 126 (FIG. 1). Additionally, the user interface 200b includes a third region 303 that presents at least a portion of the information for the report generated by the application scanning engine 126. In particular, the third region 303 includes information associated with each violation of a rule 143 that has been identified by the application scanning engine 126. For each violation of a rule 143, the third region 303 presents the level of risk associated with the violation (e.g., “high risk,” “medium risk,” “low risk”) as well as one or more details associated with the violation. In this way, the user interface 200b may present information that an administrator for the device management system 119 may use to determine whether changes to the profiles 139 (FIG. 1) and/or client device 106 (FIG. 1) should be made. For example, the administrator may view the information presented in the user interface 200b and determine to modify one or more rules 143 for the profile 139 and/or to prohibit an application 129 from being installed in a client device 106 that is managed by the device management system 119.


Referring next to FIG. 4, shown is an illustration of an example of a user interface 200c that may be encoded and rendered by the management console 123 (FIG. 1) in the device management system 119 (FIG. 1) according to various embodiments of the present disclosure. The user interface 200c may be rendered to present to an administrator for the device management system 119 information associated with the applications 129 (FIG. 1) that are installed in the client devices 106 (FIG. 1) that are managed by the device management system 119.


As shown in FIG. 4, the user interface 200c includes a fourth region 403 that presents various information associated with the applications 129 installed in the client devices 106 that are managed by the device management system 119. For example, the fourth region 403 in FIG. 4 presents the names of the applications 129, the number of client devices 106 that have the respective applications 129 installed, the number of violations that have previously been detected for the respective applications 129, and potentially other information. An administrator for the device management system 119 may view the information presented in the fourth region 403 to determine whether an application 129 installed in multiple client devices 106 violates a profile 139 (FIG. 1) and thus may pose a risk to the device management system 119 and/or other devices.


Referring next to FIG. 5, shown is a flowchart that provides an example of the operation of a portion of the device management system 119 according to various embodiments. In particular, FIG. 5 provides an example of the device management system 119 processing an application 129 (FIG. 1) and initiating an action if the violations exceed a predetermined threshold. It is understood that the flowchart of FIG. 5 provides merely an example of the many different types of functional arrangements that may be employed to implement the operation of the portion of the device management system 119 as described herein. As an alternative, the flowchart of FIG. 5 may be viewed as depicting an example of elements of a method implemented in the computing environment 103 (FIG. 1) according to one or more embodiments.


Beginning with box 503, the device management system 119 identifies an application 129 that is already installed or that may be installed later in a client device 106 (FIG. 1). To this end, the device management system 119 may receive a list of applications 129 that are installed in a client device 106, and one or more of the applications 129 may be selected for processing.


At box 506, the device management system 119 then determines whether a report has already been generated for the identified application 129. If so, the device management system 119 moves to box 535. Otherwise, the device management system 119 proceeds to box 509 and obtains a compiled version of the application 129. In some embodiments, the device management system 119 may retrieve the compiled version of the application 129 from the public application repository 153 (FIG. 1).


The device management system 119 then decompiles the compiled version of the application 129 and generates assembly and/or intermediary code using the decompiler 133 (FIG. 1), as indicated at box 513. Next, at box 516, the code analyzer 136 (FIG. 1) may be used to identify the operations that are represented in the assembly and/or intermediary code for the application 129.


At box 519, the device management system 119 identifies the usage category 146 (FIG. 1) for the application 129. In some embodiments, data representing the usage category 146 may be provided by the public application repository 153. In other embodiments, an administrator for the device management system 119 may manually input the usage category 146. Alternatively, the device management system 119 may identify the usage category 146 using the name of the application 129 and/or metadata associated with the application 129, as described above.


The device management system 119 then moves to box 523 and obtains the profile 139 (FIG. 1) that has been assigned to the identified usage category 146. As discussed above, a profile 139 may comprise a set of one or more rules 143 (FIG. 1) that may specify whether various operations are permitted to be performed by an application 129 in a client device 106. At box 526, the device management system 119 determines the number of times each rule 143 is violated by an operation represented in the assembly and/or intermediary code. For example, if a profile 139 has a first rule 143, a second rule 143, and a third rule 143, the application scanning engine 126 may identify how many times the first rule 143 is violated, how many times the second rule 143 is violated, and how many times the third rule 143 is violated by the operations represented in the code.


At box 529, the device management system 119 generates a report that may include, for example, the number of times each rule 143 is violated. At least a portion of the report may be encoded and rendered in a user interface, as discussed above. Next, the device management system 119 stores the report and associated data, such as the assembly and/or intermediary code and information indicating whether the application 129 complies with the profile 139, in the data store 116 (FIG. 1), as indicated at box 533. Data from the report may be used to determine whether the application 129 is compliant with other profiles 139, without requiring the application scanning engine 126 to decompile and process the application 129 again.


At box 535, the device management system 119 encodes and displays at least a portion of the information in the report in one or more user interfaces. For example, the one or more user interfaces may present the total number of times that the rules 143 for a profile 139 have been violated, the number of times that sets of rules 143 associated with the respective levels of risk have been violated, descriptions of the rules 143 that have been violated, etc. In this way, the device management system 119 may provide an administrator with information that facilitates the administrator deciding whether the application 129 is a security risk.


Next, the device management system 119 moves to box 536. At box 536, the device management system 119 determines whether the number of violations for the application 129 exceeds a predefined threshold. In one embodiment, the threshold may be satisfied if more than N total violations are identified, where N is a predefined number. In alternative embodiments, the threshold may be satisfied if the number of violations that are assigned a particular level of risk satisfies a predetermined threshold.


If the violations do not exceed the predefined threshold, the process ends. Otherwise, the device management system 119 moves to box 539 and initiates a remedial action. In one embodiment, the computing environment 103 may stop communicating with the client device 106 and/or reject communication requests from the client device 106. In another embodiment, the device management system 119 may generate and encode a message to be presented to an administrator for the device management system 119. Such a message may inform the administrator that the client device 106 has an application 129 that does not comply with a profile 139. In other embodiments, the device management system 119 may transmit a command to the management component 149 (FIG. 1) to cause the management component 149 to perform an action. Thereafter, the process ends.


Referring next to FIG. 6, shown is a flowchart that provides another example of the operation of a portion of the device management system 119 according to various embodiments. In particular, FIG. 6 provides an example of the device management system 119 identifying applications 129 (FIG. 1) that are in compliance with a profile 139 (FIG. 1) and making the identified applications 129 available for distribution through a private application repository. It is understood that the flowchart of FIG. 6 provides merely an example of the many different types of functional arrangements that may be employed to implement the operation of the portion of the device management system 119 as described herein. As an alternative, the flowchart of FIG. 6 may be viewed as depicting an example of elements of a method implemented in the computing environment 103 (FIG. 1) according to one or more embodiments.


Beginning at box 603, the device management system 119 obtains a profile 139. As discussed above, a profile 139 may comprise a set of one or more rules 143 (FIG. 1) that may specify whether various operations are permitted to be performed by an application 129 in a client device 106. At box 606, the device management system 119 then identifies one or more applications 129 that comply with the profile 139. As discussed with respect to FIG. 5, after the application scanning engine 126 (FIG. 1) has processed an application 129, the device management system 119 may store information indicating whether the application 129 complies with a profile 139. As such, the device management system 119 may obtain a list of applications 129 that have been deemed to comply with the profile 139.


Next, the device management system 119 moves to box 609 and encodes one or more user interfaces with representations of the applications 129 that have been identified. Thus, the one or more user interfaces may present several applications 129 that have been deemed compliant with the profile 139 to an administrator for the device management system 119. Additionally, the one or more user interfaces may facilitate the administrator selecting one or more of the presented applications 129 for making available for distribution to the client devices 106. An application 129 may be selected, for example, by the administrator using an input device, such as a mouse or touch pad, to select a user interface element (e.g., a check box, an image, etc.) associated with the presented application 129.


At box 613, the device management system 119 obtains a selection of one or more of the applications 129 that are presented in the one or more user interfaces. Thereafter, the device management system 119 moves to box 616 and associates the selected one or more applications 129 with the private application repository. In this way, the selected applications 129 may be made available for distribution through the private application repository, and a client device 106 may obtain and install one or more of these applications 129 through the private application repository. Thereafter, the process ends.


With reference to FIG. 7, shown is a schematic block diagram of the computing environment 103 according to an embodiment of the present disclosure. The computing environment 103 includes one or more computing devices 700. Each computing device 700 includes at least one processor circuit having, for example, a processor 703 and a memory 706, both of which are coupled to a local interface 709. As such, each computing device 700 may comprise, for example, at least one server computer or like device. The local interface 709 may comprise, for example, a data bus with an accompanying address/control bus or other bus structure as can be appreciated.


Stored in the memory 706 are both data and several components that are executable by the processor 703. In particular, stored in the memory 706 and executable by the processor 703 are the device management system 119 and potentially other systems. Also stored in the memory 706 may be a data store 116 and other data. In addition, an operating system may be stored in the memory 706 and executable by the processor 703.


It is understood that there may be other applications that are stored in the memory 706 and are executable by the processor 703 as can be appreciated. Where any component discussed herein is implemented in the form of software, any one of a number of programming languages may be employed such as, for example, C, C++, C#, Objective C, Java®, JavaScript®, Perl, PHP, Visual Basic®, Python®, Ruby, Flash®, or other programming languages.


A number of software components are stored in the memory 706 and are executable by the processor 703. In this respect, the term “executable” means a program file that is in a form that can ultimately be run by the processor 703. An example of an executable program is a compiled program that can be translated into machine code in a format that can be loaded into a random access portion of the memory 706 and run by the processor 703. An executable program may be stored in any portion or component of the memory 706 including, for example, random access memory (RAM), read-only memory (ROM), a hard drive, a solid-state drive, a flash drive, a memory card, an optical disc such as a compact disc (CD) or digital versatile disc (DVD), a floppy disk, magnetic tape, or other memory components.


The memory 706 is defined herein as including both volatile and nonvolatile memory components. Volatile components are those that do not retain data values upon loss of power. Nonvolatile components are those that retain data values upon a loss of power. Thus, the memory 706 may comprise, for example, random access memory (RAM), read-only memory (ROM), hard disk drives, solid-state drives, flash drives, memory cards accessed via a memory card reader, floppy disks accessed via an associated floppy disk drive, optical discs accessed via an optical disc drive, magnetic tapes accessed via an appropriate tape drive, and/or other memory components, or a combination of any two or more of these memory components. In addition, the RAM may comprise, for example, static random access memory (SRAM), dynamic random access memory (DRAM), or magnetic random access memory (MRAM) and other such devices. The ROM may comprise, for example, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other like memory device.


Also, the processor 703 may represent multiple processors 703 and/or multiple processor cores, and the memory 706 may represent multiple memories 706 that operate in parallel processing circuits, respectively. In such a case, the local interface 709 may be an appropriate network that facilitates communication between any two of the multiple processors 703, between any processor 703 and any of the memories 706, or between any two of the memories 706, etc. The local interface 709 may comprise additional systems designed to coordinate this communication, including, for example, performing load balancing. The processor 703 may be of electrical or of some other available construction.


Although the device management system 119 and other various systems described herein may be embodied in software or code executed by general purpose hardware as discussed above, as an alternative, the device management system 119 and other systems may also be embodied in dedicated hardware or a combination of software/general purpose hardware and dedicated hardware. If embodied in dedicated hardware, each can be implemented as a circuit or state machine that employs any one of or a combination of a number of technologies. These technologies may include, but are not limited to, discrete logic circuits having logic gates for implementing various logic functions upon an application of one or more data signals, application specific integrated circuits (ASICs) having appropriate logic gates, field-programmable gate arrays (FPGAs), and/or other suitable components. Such technologies are generally well known by those skilled in the art and, consequently, are not described in detail herein.


The flowcharts of FIGS. 5-6 show examples of the functionality and operation of an implementation of portions of the device management system 119. If embodied in software, each block may represent a module, segment, or portion of code that comprises program instructions to implement the specified logical function(s). If embodied in hardware, each block may represent a circuit or a number of interconnected circuits to implement the specified logical function(s).


Although the flowcharts of FIGS. 5-6 show a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more boxes may be scrambled relative to the order shown. Also, two or more boxes shown in succession in FIGS. 5-6 may be executed concurrently or with partial concurrence. Further, in some embodiments, one or more of the boxes shown in FIGS. 5-6 may be skipped or omitted. In addition, any number of counters, state variables, warning semaphores, or messages might be added to the logical flow described herein, for purposes of enhanced utility, accounting, performance measurement, or providing troubleshooting aids, etc. It is understood that all such variations are within the scope of the present disclosure.


Also, any logic or application described herein, including the device management system 119, that comprises software or code can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor 703 in a computer system or other system. In this sense, the logic may comprise, for example, statements including instructions and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system. In the context of the present disclosure, a “computer-readable medium” can be any medium that can contain, store, or maintain the logic or application described herein for use by or in connection with the instruction execution system.


The computer-readable medium can comprise any one of many physical media such as, for example, magnetic, optical, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, memory cards, solid-state drives, flash drives, or optical discs. Also, the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device.


Further, any logic or application described herein, including the device management system 119, may be implemented and structured in a variety of ways. For example, one or more applications described may be implemented as modules or components of a single application. Further, one or more applications described herein may be executed in shared or separate devices or a combination thereof. For example, a plurality of the applications described herein may execute in the same computing device 700, or in multiple computing devices 700 in the same computing environment 103. Additionally, it is understood that terms, such as “application,” “service,” “system,” “engine,” “module,” and so on, may be interchangeable and are not intended to be limiting.


Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that the term may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.


It is emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiments without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

Claims
  • 1. A method for managed device risk assessment, comprising:
    decompiling a copy of an application installed on a client device that has a management component installed therein to interact with a management service, to identify a plurality of operations represented in decompiled code of the application;
    identifying a usage category for the application;
    obtaining a profile for the usage category, the profile comprising a plurality of rules each associated with one of different levels of risk, the plurality of rules including a first rule associated with a high level of risk, and a second rule associated with a low level of risk;
    parsing the decompiled code to identify, with at least one computing device, a first number of violations of the first rule, wherein the first number indicates times that a first subset of the plurality of operations represented in the decompiled code are in violation of the first rule;
    parsing the decompiled code to identify, with the at least one computing device, a second number of violations of the second rule, wherein the second number indicates times that a second subset of the plurality of operations represented in the decompiled code are in violation of the second rule;
    determining that a total of the first number of violations and the second number of violations exceeds a predetermined threshold for a total number of violations by the operations represented in the decompiled code;
    initiating a remedial action in response to determining that the total number of violations exceeds the predetermined threshold, wherein initiating the remedial action comprises transmitting a command to the management component of the client device to wrap the application with an application wrapper that encapsulates the application as an intermediary executable that restricts at least one of the plurality of operations to enforce a compliance rule that is identified based at least in part on the profile for the usage category, and rejecting communications from the client device; and
    transmitting data for the application to a machine learning system to train the machine learning system to identify at least one characteristic that indicates a violation of the profile.
  • 2. The method of claim 1, wherein the application wrapper enables an additional functionality.
  • 3. The method of claim 2, wherein the additional functionality comprises adding, using the application wrapper, an authentication requirement to access the application.
  • 4. The method of claim 1, wherein the application wrapper wraps the application without recompilation of source code of the application.
  • 5. The method of claim 1, wherein initiating the remedial action comprises causing data associated with the application on the client device to be encrypted.
  • 6. The method of claim 1, wherein decompiling the application comprises: decompiling a compiled version of the application to generate intermediate or assembly code of the application; and identifying the plurality of operations in the intermediate or assembly code.
  • 7. The method of claim 1, further comprising generating a report that presents the first number of times that the first rule is violated and the second number of times that the second rule is violated.
  • 8. The method of claim 7, further comprising transmitting a notification of the report to a developer of the application, the notification indicating that the application violates the profile.
  • 9. The method of claim 1, further comprising obtaining the copy of the application in response to identifying that the application is installed on the client device, the client device being managed by a device management system.
  • 10. A non-transitory computer-readable medium embodying program code thereon that, when executed by at least one computing device, directs the at least one computing device to at least:
    decompile a copy of an application installed on a client device that has a management component installed therein to interact with a management service, to generate decompiled code and identify a plurality of operations represented in the decompiled code of the application;
    identify a usage category for the application;
    obtain a profile for the usage category, the profile comprising a plurality of rules each associated with one of different levels of risk, the plurality of rules including a first rule associated with a high level of risk and a second rule associated with a low level of risk;
    parse the decompiled code to determine a first number of violations of the first rule, wherein the first number indicates times that a first subset of the plurality of operations represented in the decompiled code are in violation of the first rule;
    parse the decompiled code to determine a second number of violations of the second rule, wherein the second number indicates times that a second subset of the plurality of operations represented in the decompiled code are in violation of the second rule;
    determine that a total of the first number of violations and the second number of violations exceeds a predetermined threshold for a total number of violations by the operations represented in the decompiled code;
    initiate a remedial action in response to the total number of violations exceeding the predetermined threshold, wherein the remedial action is initiated by transmitting a command to the management component of the client device to wrap the application with an application wrapper that encapsulates the application to modify at least one of the plurality of operations to enforce a rule associated with the profile for the usage category, and rejecting communications from the client device; and
    transmit data for the application to a machine learning system to train the machine learning system to identify at least one characteristic that indicates a violation of the profile.
  • 11. The non-transitory computer-readable medium of claim 10, wherein the application wrapper enables an additional functionality.
  • 12. The non-transitory computer-readable medium of claim 11, wherein the additional functionality comprises adding, using the application wrapper, an authentication requirement to access the application.
  • 13. The non-transitory computer-readable medium of claim 10, wherein, to decompile the application, the at least one computing device is further directed to: decompile a compiled version of the application to generate intermediate or assembly code of the application; and identify the plurality of operations in the intermediate or assembly code.
  • 14. The non-transitory computer-readable medium of claim 10, wherein the at least one computing device is further directed to generate a report that presents the first number of times that the first rule is violated and the second number of times that the second rule is violated.
  • 15. The non-transitory computer-readable medium of claim 14, wherein the at least one computing device is further directed to transmit a notification of the report to a developer of the application, the notification indicating that the application violates the profile.
  • 16. The non-transitory computer-readable medium of claim 10, wherein the at least one computing device is further directed to obtain the copy of the application in response to the application being installed on the client device.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. Non-Provisional Application Ser. No. 14/498,115, filed Sep. 26, 2014, titled “Risk Assessment for Managed Client Devices,” which claims priority to U.S. Provisional Application No. 61/877,623, titled “Software Application Scanning and Reputation Analysis,” which was filed on Sep. 13, 2013, and U.S. Provisional Application No. 61/943,128, titled “Application Policy Management,” which was filed on Feb. 21, 2014, each of which is hereby incorporated by reference herein in its entirety.

Related Publications (1)
Number Date Country
20220012346 A1 Jan 2022 US
Provisional Applications (2)
Number Date Country
61943128 Feb 2014 US
61877623 Sep 2013 US
Continuations (1)
Number Date Country
Parent 14498115 Sep 2014 US
Child 17483177 US