Some of the most severe malware acts involve asset access and control by remote criminal operators, who gain the ability to command and control malware-infected computer assets remotely when an organizational asset connects to a remote server. In this manner, access to sensitive data can be gained and, in some cases, the data can be sent to individuals or organizations outside of the network. In addition, the organizational asset can be used, unknown to the organization, to carry out criminal acts.
Organizations seeking to detect and respond to such threats, and/or many other types of threats, must track and assess the risk that infected assets on their network pose to the organization, and thus the potential loss of information and/or other risks.
It should be noted that a network event can be defined as communication from an organizational asset intended to establish a connection to a server outside of the organization. More specifically, in one embodiment, a malicious network event can be defined as a network event performed by malware on an organization's asset. Observing a “malicious network event” can indicate that the organizational asset is infected with malware. Those of ordinary skill in the art will see that there are many ways to discover and identify a “malicious network event”. In one embodiment of the invention, a method and system can be provided to analyze attributes associated with or related to malicious network events from an organizational asset. In one embodiment, an attribute can be defined as forensic information collected during or related to the malicious network event. Attributes can be used to individually or collectively indicate a level of risk to an organization that has assets taking part in malicious network events.
In order to derive the risk associated with an asset participating in malicious network events on a network, in 105, evidence used to derive risk can be collected. The evidence can include, but is not limited to, malware related attributes and forensics.
In 110, an assessment of risk can be performed. This assessment can be based on, for example, evidence collected in 105. The evidence can include attributes (e.g., forensics) associated with or related to malicious network events, gathered using, for example, files that depict the actual malicious network event and/or the description of the malicious network event. The evidence can also include attributes, for example: an asset's activity within the network and/or changes to assets and their associated network activity due to malware; and/or asset activity relative to other assets within the network. In one embodiment, an asset may possess a high relative risk due to current malicious network events. However, its derived relative risk may lessen upon the introduction into the network of assets with malicious network events associated with higher risk.
In 115, assessed risk can be categorized, prioritized, or admonished, or any combination thereof. The method and system 100 admonishes risk through the use of alerts sent to a user of the method and system, through mechanisms such as, but not limited to, graphical user interface presentation of risk, syslog alerts, e-mail, Simple Network Management Protocol (SNMP) traps, and/or pager events, according to one embodiment.
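As a hedged illustration of the flow through 105, 110, and 115, the sketch below strings together evidence collection, a placeholder risk assessment, and an alerting step. The function names, the placeholder aggregation, and the use of plain logging in place of syslog/SNMP/e-mail/pager channels are assumptions chosen for readability, not the claimed implementation.

```python
# Minimal sketch of the collect -> assess -> admonish flow described above.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("asset-risk")

def collect_evidence(asset):
    """Step 105: gather malware-related attributes/forensics for an asset."""
    # In practice this would come from network sensors and event forensics.
    return asset.get("attributes", {})

def assess_risk(evidence):
    """Step 110: derive a 0-100 risk figure from the collected attributes."""
    if not evidence:
        return 0.0
    return sum(evidence.values()) / len(evidence)  # placeholder aggregation

def admonish(asset_name, risk, threshold=75):
    """Step 115: categorize, prioritize, and alert on assessed risk."""
    category = "high" if risk >= threshold else "normal"
    if category == "high":
        # syslog, e-mail, SNMP traps, or pager events could be used here;
        # plain logging stands in for those channels in this sketch.
        log.warning("asset %s categorized %s (risk %.1f)", asset_name, category, risk)
    return category

asset = {"name": "host-17", "attributes": {"bytes_out": 95, "connection_attempts": 80}}
admonish(asset["name"], assess_risk(collect_evidence(asset)))
```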
Referring again to
In the network configuration of
It should be noted that method 100 is not limited to calculating the risk based solely upon event attributes, but rather, may assess risk based upon any network activity associated with, but not confined to, an asset's communication with a server. In one embodiment, attributes collected as forensics can be used to calculate risk associated with internal assets.
As illustrated in
Asset Priority 350. A configurable priority set to specific assets, indicating their importance to an organization, expressed as a number in the 0-100 range, according to one embodiment. As an example, an asset of priority 100 may represent a mission-critical asset.
Bytes In 351. The total quantity of information observed to enter the asset, once a successful connection is established, expressed as a number in the 0-100 range, according to one embodiment. As an example, an asset with Bytes In of 100 may represent, but is not limited to, a large amount of instruction sets, commands, or repurposed malware (newer malware) delivered to the infected asset by a remote criminal operator.
Bytes Out 352. The total quantity of information observed to exit the asset, once a successful connection is established, expressed as a number in the 0-100 range, according to one embodiment. As an example, an asset with Bytes Out of 100 may represent, but is not limited to, the exfiltration of data such as personal identification information, trade secrets, proprietary or confidential data, or intellectual property to remote criminal operators as a form of data theft.
Number of Threats on Asset 353. The number of unique instances of active threats on the asset, expressed as a number in the 0-100 range, according to one embodiment. As an example, an asset with a Number of Threats of 100 would represent an asset that has a large number of infections and therefore a higher risk.
Number of Connection Attempts 354. The total number of times a connection has been attempted to/from the asset, regardless of success, according to one embodiment. As an example, an asset with Connection Attempts of 100 would represent an asset that has active, frequent communication with at least one criminal operator and is thus an active threat.
Success of Connection Attempts 355. The percentage of times the connection attempts successfully connect and exchange data as part of a malicious network event, expressed as a number in the 0-100 range, according to one embodiment. As an example, an asset with Successful Connection Attempts of 100 would represent an asset that has successfully communicated with a remote criminal operator and thus exchanged communications.
Geo-Location of Connection Attempts 356. A configurable priority set to the specific geo-location based on the location of the IP address of connection attempts related to malicious network events, expressed as a number in the 0-100 range, according to one embodiment. As an example, a geo-location priority 100 may represent a connection attempt to an IP address located in a country designated to be high risk by the customer.
Network Type for Connection Attempt 357. A configurable priority set to specific network types, such as residential, commercial, government, or other networks, as being higher risk for connection attempts related to malicious network events, expressed as a range 0-100, according to one embodiment. As an example, a network type of priority 100 may represent a network (e.g., residential) to which customer data should not be connecting.
Domain State: Active or Sinkholed 358. The identification of a domain as Active or Sinkholed related to a DNS query and/or subsequent connection attempt related to a malicious network event, expressed as a range of 0-100, according to one embodiment. As an example, a Domain State of 100 may represent an Active domain where a Domain State of 50 may represent a Sinkholed domain.
Domain Type: Paid or Free Dynamic DNS Domain 359. The identification of a domain as either a paid domain or a free dynamic DNS domain as part of a DNS query related to a malicious network event, expressed as a range of 0-100, according to one embodiment. As an example, a Domain Type of 100 may represent a free dynamic DNS domain where a Domain Type of 50 may represent a paid domain.
Number of Malicious Files 360. The total number of malicious files observed to go to an asset, expressed as a number in the 0-100 range, according to one embodiment. As an example, an asset with a Number of Malicious Files of 100 would represent an asset that is actively receiving new malware or repurposed malware to infect or re-infect the asset to either evade detection or to carry out new malicious events.
Payload 361. A priority (e.g., which may be configurable) set to the type of payload, such as but not limited to obfuscated, encrypted, or plain text, observed during connection attempts related to malicious network events, expressed as a range 0-100, according to one embodiment. As an example, a Payload of 100 may represent an encrypted payload.
Marked Data 362. A configurable priority set for observed marked data, such as “Confidential” or “Proprietary”, observed during connection attempts related to malicious network events, expressed as a range 0-100 according to one embodiment. As an example, an asset with Marked Data of 100 would represent an asset that has been involved in exfiltration of confidential or proprietary data thus indicating data theft by a remote criminal operator.
Vulnerabilities 363. A configurable priority set to specific assets based on identified vulnerabilities on those assets, expressed as a range 0-100, according to one embodiment. As an example, a Vulnerability of 100 would indicate that the asset being investigated has known vulnerabilities that could be used by the remote criminal operator to control the asset and exfiltrate data.
Confidence of Presence of Advanced Malware 364. A configurable priority set for specific assets based on the confidence the system has of the presence of advanced malware on the asset, expressed as a range 0-100, according to one embodiment. As an example, an asset with a Confidence of 100 would indicate a higher risk that data could be exfiltrated from a network.
It should be noted that the ranges described above are example ranges, and that many other ranges can be used.
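For illustration only, the sketch below models the local attributes 350-364 as a simple record whose fields are constrained to the example 0-100 range. The field names and the validation step are assumptions chosen for readability, not part of the specification.

```python
# Illustrative container for the local attributes 350-364 described above,
# each normalized to the example 0-100 range.
from dataclasses import dataclass, fields

@dataclass
class LocalAttributes:
    asset_priority: float = 0        # 350
    bytes_in: float = 0              # 351
    bytes_out: float = 0             # 352
    threat_count: float = 0          # 353
    connection_attempts: float = 0   # 354
    connection_success: float = 0    # 355
    geo_location: float = 0          # 356
    network_type: float = 0          # 357
    domain_state: float = 0          # 358
    domain_type: float = 0           # 359
    malicious_files: float = 0       # 360
    payload: float = 0               # 361
    marked_data: float = 0           # 362
    vulnerabilities: float = 0       # 363
    malware_confidence: float = 0    # 364

    def __post_init__(self):
        # Enforce the example 0-100 range used throughout this description.
        for f in fields(self):
            value = getattr(self, f.name)
            if not 0 <= value <= 100:
                raise ValueError(f"{f.name} must fall in the example 0-100 range")

attrs = LocalAttributes(asset_priority=100, bytes_out=90)
```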
It should also be noted that, in the local attribute list 321 in
AV Coverage 380. A percentage correlating the availability of an AV vendor's anti-virus/malware signature for specific known malware variants, according to one embodiment. As an example, an AV Coverage of 0 would indicate that the referenced AV vendor has no coverage for the threat; as such, the threat poses greater risk to the user, and the AV vendor will have a poor chance of assisting in remediation efforts.
Severity 381. For known threats related to malicious communications, a ranking can be based upon previously observed exploits to internal networks, expressed as a number in the 0-100 range, according to one embodiment. As an example, an asset with a threat that has Severity of 100 represents a high risk to the network based on prior experience about the threat in other networks.
It should be noted that many other ranking schemes can be utilized. It should also be noted that embodiments of the invention are not limited to tracking only the aforementioned local attributes 321 and global threat attributes 322. Due to the ever-changing nature of risk, risk can be continually assessed and prioritized, and additional or different attributes can be tracked and added as needed. The example in
For example, the number of connection attempts 354 attribute can represent a malware-compromised asset's attempts at reaching an external entity. Although this behavior carries associated risk, the magnitude of that risk may grow linearly with increased attempts and be considered far less severe than the risk of an asset that has successfully connected to a server and has received information and commands to execute, along with data to transmit, represented by the bytes in and bytes out attributes, where the severity of the risk increases exponentially with the amount of information received and sent. Transforms B and C can therefore use a different scale, such as one that is logarithmic in nature, when transforming the bytes in/bytes out attribute risk and assigning risk accordingly. Independent risks A-O and α-β can thus be calculated for every attribute, according to one embodiment, as follows:
Risk A—Asset Priority. The asset priority risk can be a number in the 1-5 range assigned by the user to an asset or group of assets, with 1 representing a high-priority asset, and 5, a low priority asset. The number assigned can be compared against a set of preselected ranges, and the risk associated with the ranges can then be assigned to the asset(s). As an example, when a user sets an asset to category priority 5, the risk assigned to the asset can be set to 10; priority 1 assets, conversely, could have an assigned risk of 100.
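A minimal sketch of this kind of lookup follows. The table values for priorities 2-4 are assumptions, since the description above only gives the priority 5 and priority 1 examples; the same pattern can also apply to the other user-configured priorities (e.g., Risks G through N).

```python
# Sketch of the priority-to-risk lookup used by Risk A. Values for
# priorities 2-4 are illustrative assumptions.
PRIORITY_RISK = {1: 100, 2: 80, 3: 55, 4: 30, 5: 10}

def priority_risk(priority: int) -> int:
    """Map a user-assigned 1-5 priority onto its configured risk weight."""
    if priority not in PRIORITY_RISK:
        raise ValueError("priority must be in the 1-5 range")
    return PRIORITY_RISK[priority]

assert priority_risk(5) == 10    # low-priority asset -> risk 10
assert priority_risk(1) == 100   # high-priority asset -> risk 100
```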
Risk B—Bytes In. This can provide a log distribution of infected assets based on the amount of data transferred from the server to the assets. The log scale can be centered on the asset whose data in is the median of the distribution. The contribution for the bytes in risk can be increased logarithmically as bytes in scores exceed the median. As an example, if the median Bytes In for infected assets inside a network is 100 Kb, and asset A initially had 90 Kb of Bytes In but now has 120 Kb of Bytes In, then asset A's risk has surpassed the median and is now of substantially higher risk to an organization.
Risk C—Bytes Out. This can provide a log distribution of infected assets based on the amount of data transferred to the server from the assets. The log scale can be centered on the asset whose data out is the median of the distribution. The contribution for the bytes out risk can be increased logarithmically as bytes out scores exceed the median. As an example, if the median Bytes Out for infected assets inside a network is 100 Kb, and asset A initially had 90 Kb of Bytes Out but now has 120 Kb of Bytes Out, then asset A's risk has surpassed the median and is now of substantially higher risk to an organization.
Risk D—Number of Threats on Asset. This can be a number calculated according to the total number of threats present on an asset. The presented threat counts can be compared with preselected ranges that have an attributed risk weight associated with them. As an example, if the threat count presented is 3 or more, the highest attributed risk weight of 100 can be assigned as the number of threats on that particular asset.
Risk E—Connection Attempts. This can provide a log distribution of infected assets based on the number of connections to the server from the assets. The log scale can be centered on the asset whose number of connections is the median of the distribution. The contribution for the connection attempts risk can be increased logarithmically as connection attempt scores exceed the median. As an example, if the median Connection Attempts for infected assets inside a network is 100, and asset A initially had 90 Connection Attempts but now has 120 Connection Attempts, then asset A's risk has surpassed the median and is now of substantially higher risk to an organization.
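The sketch below illustrates one way such a median-centered logarithmic transform could be written for Risks B, C, and E. The anchor score of 50 at the median and the scaling factor are assumptions, not values from the specification.

```python
# Median-centered log transform: the population median of the attribute
# (bytes in, bytes out, or connection attempts) across infected assets
# anchors the scale, and an asset's contribution grows logarithmically
# once its value exceeds that median.
import math
from statistics import median

def median_log_risk(value, population):
    mid = median(population)
    if value <= 0 or mid <= 0:
        return 0.0
    score = 50.0 + 25.0 * math.log2(value / mid)   # 50 at the median (assumed anchor)
    return max(0.0, min(100.0, score))

# Example from the text: the median Bytes In across infected assets is 100 Kb;
# asset A moves from 90 Kb to 120 Kb and its contribution rises past the median.
bytes_in_population = [40, 90, 100, 150, 400]      # Kb per infected asset
print(median_log_risk(90, bytes_in_population))    # below the median
print(median_log_risk(120, bytes_in_population))   # above the median -> higher risk
```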
Risk F—Success of Connection Attempts. This can be a number calculated according to the success rate of the total connection attempts made by an asset related to malicious network events. A connection attempt may be defined as successful upon the delivery or receipt of data from a malicious network event. The presented success rate can be compared with preselected ranges that have an attributed risk weight associated with them. As an example, if the success rate is greater than 80%, the highest attributed risk weight of 100 can be assigned as the number of successful connection attempts.
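The sketch below illustrates the preselected-range lookup pattern shared by Risks D, F, and K. Only the top band in each table comes from the examples in this description; the lower bands are assumptions.

```python
# Compare an observed count or rate against configured bands and return the
# risk weight attached to the matching band.
def range_risk(value, bands):
    """bands: list of (lower_bound, risk_weight), checked from highest bound down."""
    for bound, weight in sorted(bands, reverse=True):
        if value >= bound:
            return weight
    return 0

THREAT_COUNT_BANDS = [(1, 40), (2, 70), (3, 100)]         # Risk D / Risk K pattern
SUCCESS_RATE_BANDS = [(0.2, 30), (0.5, 60), (0.8, 100)]   # Risk F pattern

assert range_risk(3, THREAT_COUNT_BANDS) == 100     # 3 or more threats -> 100
assert range_risk(0.85, SUCCESS_RATE_BANDS) == 100  # success rate above 80% -> 100
```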
Risk G—Geo-Location. The geo-location can be a number in the 1-5 range assigned by the user to specific geographic locations for connection attempts, with 1 representing a high-priority geo-location, and 5, a low-priority geo-location. The number assigned can be compared against a set of preselected ranges, and the risk associated with the ranges can be assigned to the asset(s). As an example, when a user sets a geo-location to priority 5, the risk assigned to the asset can be set to 10; priority 1 geo-locations, conversely, could have an assigned risk of 100.
Risk H—Network Type. The network type can be a number in the 1-5 range assigned by the user to specific network types, with 1 representing high-priority network types, and 5 representing low-priority network types. The number assigned can be compared against a set of preselected ranges, and the risk associated with the ranges can be assigned to the asset(s). As an example, when a user sets a network type to priority 5, the risk assigned to the asset can be set to 10; a priority 1 network type, conversely, could have an assigned risk of 100.
Risk I—Domain State. The domain state can be a number in the 1-5 range assigned by the user to specific domain states, with 1 representing a high-priority domain state, and 5, a low-priority domain state. The number assigned can be compared against a set of preselected ranges, and the risk associated with the ranges can be assigned to the asset(s). As an example, when a user sets a domain state to priority 5, the risk assigned to the asset can be set to 10; a priority 1 domain state, conversely, could have an assigned risk of 100.
Risk J—Domain Type. The domain type can be a number in the 1-5 range assigned by the user to specific domain types, with 1 representing a high-priority domain type, and 5, a low-priority domain type. The number assigned can be compared against a set of preselected ranges, and the risk associated with the ranges can be assigned to the asset(s). As an example, when a user sets a domain type to priority 5, the risk assigned to the asset can be set to 10; a priority 1 domain type, conversely, could have an assigned risk of 100.
Risk K—Malicious Files. This can be a number calculated according to the total number of Malicious Files delivered to an asset. The presented Malicious File counts can be compared with preselected ranges that have an attributed risk weight associated with them. As an example, if the Malicious File count presented is 3 or more, the highest attributed risk weight of 100 can be assigned as the number of Malicious Files delivered to a particular asset.
Risk L—Payload. The payload type can be a number in the 1-5 range assigned by the user to specific payloads, with 1 representing the high-priority payload type, and 5, a low-priority payload type. The number assigned can be compared against a set of preselected ranges, and the risk associated with the ranges can be assigned to the asset(s). As an example, when a user sets a payload type to priority 5, the risk assigned to the asset can be set to 10; a priority 1 payload type, conversely, could have an assigned risk of 100.
Risk M—Marked Data. The marked data can be a number in the 1-5 range assigned by the user to specific marked data types, with 1 representing a high-priority marked data type, and 5, a low-priority marked data type. The number assigned can be compared against a set of preselected ranges, and the risk associated with the ranges can be assigned to the asset(s). As an example, when a user sets a marked data type to priority 5, the risk assigned to the asset can be set to 10; a priority 1 marked data type, conversely, could have an assigned risk of 100.
Risk N—Vulnerabilities. A vulnerability can be a number in the 1-5 range assigned by the user to specific vulnerability types, with 1 representing a high-priority vulnerability, and 5, a low-priority vulnerability. The number assigned can be compared against a set of preselected ranges, and the risk associated with the ranges can be assigned to the asset(s). As an example, when a user sets a vulnerability type to priority 5, the risk assigned to the asset can be set to 10; a priority 1 vulnerability type, conversely, could have an assigned risk of 100.
Risk α—AV Coverage. AV coverage risk can be an average of AV coverage for all threats on the asset. This can be counted only for the AV engine that a user has selected as their AV, which is a configurable option within one embodiment of the invention. The presented AV coverage number can correspond to preselected ranges that have an attributed risk weight associated with them. As an example, if an AV vendor's coverage is displayed as 90% for the variants related to the threat, the lowest risk weight can be assigned to the AV coverage risk; conversely, an AV vendor displaying 0% for the same variants can have the highest risk weight assigned.
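As an illustration, the sketch below averages the selected vendor's coverage over the threats on an asset and maps the average inversely onto a risk weight. The band boundaries and weights are assumptions; the description above only fixes the two extremes.

```python
# Sketch of Risk α: average the selected AV vendor's coverage across all
# threats on the asset, then map that average inversely onto a risk weight
# (full coverage -> low risk, no coverage -> high risk).
def av_coverage_risk(coverages_by_vendor, selected_vendor):
    coverages = coverages_by_vendor.get(selected_vendor, [])
    if not coverages:
        return 100                      # no data from the chosen engine: treat as uncovered
    avg = sum(coverages) / len(coverages)
    if avg >= 90:
        return 10                       # well covered -> lowest risk weight
    if avg >= 50:
        return 50                       # partial coverage (assumed middle band)
    return 100                          # little or no coverage -> highest risk weight

# Coverage (in percent) per threat seen on the asset, keyed by AV vendor.
print(av_coverage_risk({"VendorA": [95, 90], "VendorB": [0, 10]}, "VendorB"))  # -> 100
```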
Risk β—Severity. A risk score can be calculated and set by the severity of a threat on an asset based on knowledge of previously observed exploits and threats. This risk score can be delivered directly to the product, and can range from 0-100. As an example, if the Severity is 80 for a threat on an asset, then that asset has a lower risk than an asset with a threat Severity of 90.
It should be noted that the above risks A-O and α-β are only example risks and ranges, and that other risks and ranges and/or combinations of the risks and ranges above can be used instead of or in addition to the risks and ranges above.
In one embodiment, risks A-O and α-β can be aggregated into algorithm 330. The algorithm 330 can calculate composite risk 331, which can, in one embodiment, be a number derived through the weighted aggregation of risks A-O and α and β, as follows:
The overall asset risk factor can be made up of weighted factors, according to the following formula (with W representing Weight in the formula):
AV Coverage*W1 +
Severity Score*W2 +
Threat Count Score*W3 +
Priority Score*W4 +
Connection Attempt Score*W5 +
Bytes Out Score*W6 +
Bytes In Score*W7 +
Success of Connection Attempts Score*W8 +
Geo-Location Score*W9 +
Network Type Score*W10 +
Domain State Score*W11 +
Domain Type Score*W12 +
Malicious Files Score*W13 +
Payload Score*W14 +
Marked Data Score*W15 +
Vulnerabilities Score*W16
The final risk score calculation can be an average of the weighted independent risks A-O and α-β. As an example, a set of assets will have different Composite Risk scores based on the aggregation and calculations of each asset's individual risks A-O and α-β. Therefore, an asset with low individual risks A-O and α-β will have a lower Composite Risk score than an asset with high individual risks A-O and α-β. However, some individual risk scores may contribute more than other individual risk scores to an asset's Composite Risk score.
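As an illustration only, the following sketch computes a composite risk as an average of weighted independent risks and then ranks assets to obtain a relative distribution; the attribute names, example weights, and scores are assumptions and do not reflect the weights actually used by algorithm 330.

```python
# Hedged sketch of algorithm 330: each independent risk (A-O, α, β) is
# multiplied by its configured weight, and the weighted values are averaged
# into a single composite risk.
def composite_risk(risks, weights):
    """risks/weights: dicts keyed by attribute name; returns the weighted average."""
    if not risks:
        return 0.0
    weighted = [risks[name] * weights.get(name, 1.0) for name in risks]
    return sum(weighted) / len(weighted)

risks   = {"av_coverage": 50, "severity": 80, "asset_priority": 100, "bytes_out": 90}
weights = {"av_coverage": 0.5, "severity": 1.0, "asset_priority": 1.0, "bytes_out": 1.0}
print(composite_risk(risks, weights))

# Ranking a set of assets by composite risk gives the relative distribution
# described below (highest relative risk first).
assets = {"host-1": 72.5, "host-2": 31.0, "host-3": 88.0}
print(sorted(assets, key=assets.get, reverse=True))
```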
The output can be the asset risk factor score. This number can represent the relative risk of an asset in reference to other assets on the network, a relative distribution 332, and as such does not represent a comparison against an absolute value of risk, according to one embodiment. It should be noted that many other algorithms can be used to compute the asset risk factor score. Algorithm 330 in
Table 340 in
Composite risk scores ascertained via Algorithm 330 in
Example 480 in
The profiler 495 illustration in
Asset Name. Either the asset's network name or its IP address.
Connection Attempts. Total number of times an asset attempted to communicate with an external entity, regardless of success.
Operator Names. Arbitrary name assigned to an identified threat.
Industry Names. Name assigned by industry threat analysis vendors to the identified threat.
First Seen. Time (e.g., in days) when the asset was first seen to communicate with an external entity.
Last Update. Time (e.g., in days) when the asset was last seen to communicate with the external entity.
Category. User-defined priority assigned to the asset.
Tags. Subdivisions of the categories/priorities used to further segregate assets in a network.
The screen shot of
615 Convicted Asset Status. A pie chart depicting the total number of assets that have engaged in communication to unknown external entities, displayed as suspicious (e.g., possible communication) or convicted (e.g., definite communication).
620 Asset Category. A pie chart depicting the total number of assets that have engaged in communication to unknown external entities, displayed according to category, filtered by suspicious (e.g., possible communication) or convicted (e.g., definite communication).
635 Connection Summary. A bar graph depicting the total number of connections attempted by internal assets to external unknown entities, whether initiated, successful, failed or dropped.
640 Suspicious Executables Identified. A bar graph depicting the total number of unidentified executable programs downloaded in the network, filtered by submitted (e.g., by users) or un-submitted status.
625 Communication Activity. A bar graph depicting asset communication to known external threats, filtered by data (e.g., bytes) into and out of the network.
645 Connection Attempts. A bar graph depicting information contained in 635 connection summary, according to specific dates.
630 Asset Conviction Trend. A stacked marked line chart depicting information contained in 615 convicted asset status, according to a specific timeline.
650 Daily Asset Conviction. A stacked marked line chart depicting information contained in 615 convicted asset status, according to a single day.
655 Daily Botnet Presence. A stacked marked line chart depicting information pertaining to specific identified threats, with a user-defined date range.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope of the present invention. Thus, the invention should not be limited by any of the above-described exemplary embodiments.
In addition, it should be understood that the figures described above, which highlight the functionality and advantages of the present invention, are presented for example purposes only. The architecture of the present invention is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown in the figures.
Further, the purpose of the Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from cursory inspection the nature and essence of the technical disclosure of the application. The Abstract of the Disclosure is not intended to be limiting as to the scope of the present invention in any way.
It should also be noted that the terms “a”, “an”, “the”, “said”, etc. signify “at least one” or “the at least one” in the specification, claims and drawings. In addition, the term “comprising” signifies “including, but not limited to”.
Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112, paragraph 6. Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112, paragraph 6.
This application is a continuation of U.S. patent application Ser. No. 13/309,202, which claims the benefit of U.S. Provisional Patent Application No. 61/420,182, filed Dec. 6, 2010. All of the foregoing are incorporated by reference in their entireties.
| Number | Date | Country |
|---|---|---|
| 61/420,182 | Dec 2010 | US |

| Relation | Number | Date | Country |
|---|---|---|---|
| Parent | 13/309,202 | Dec 2011 | US |
| Child | 14/616,387 | | US |