This application relates to cloud-platform security and, more specifically, to detecting reconnaissance and infiltration in data lakes and cloud warehouses.
Cyber-attacks on enterprise data can happen at any time. Data is the most critical asset of any enterprise. Almost all cybersecurity tools and techniques invented and deployed to date protect the data only by proxy: they focus on protecting the server/application or the endpoints (e.g. desktop, laptop, mobile, etc.) and assume that, by extension, the data is protected. The paradox of the cybersecurity industry is that data breaches keep growing by every metric with each passing day, despite more money and resources being deployed into cybersecurity solutions. Clearly, existing approaches are failing, begging for a new solution.
In one aspect, a computerized method for detecting reconnaissance and infiltration in data lakes and cloud warehouses, comprising: monitoring a SaaS data store or a cloud-native data store from inside the data store; examining an attack and automatically identifying how far the attack has progressed in the attack lifecycle; identifying the target and scope of the attack and evaluating how far the attackers have penetrated the system and what their target is; and establishing the value of the asset subject to the attackers' attack and mapping the impact of the attack on the CIA (confidentiality, integrity and availability) triad.
In another aspect, a computerized method for implementing a SaaS data store and data lake house cybersecurity hygiene posture analysis, comprising: automatically analyzing and checking an entity's SaaS data lakes and warehouses for a set of cybersecurity weaknesses that are exploitable by an attacker; based on the analyzing and checking, determining a set of cybersecurity weaknesses in the entity's SaaS data lakes and warehouses; ranking the cybersecurity weaknesses based on a data at risk value, wherein, to determine the data at risk value, classifying a content of the data in the entity's SaaS data lakes and warehouses; calculating a preventative cybersecurity grade for the entity's SaaS data lakes and warehouses; automatically detecting any data stores in the entity's SaaS data lakes and warehouses that have data stored that has been copied from another primary data repository and has a different security posture; automatically detecting any data stores in the entity's SaaS data lakes and warehouses that have data stored that has not been accessed in a specified period; and tracking and classifying a cyberattack and placing the cyberattack in one of an n-number of stages.
The Figures described above are a representative set and are not exhaustive with respect to embodying the invention.
Disclosed are a system, method, and article for detecting reconnaissance and infiltration in data lakes and cloud warehouses. Descriptions of specific devices, techniques, and applications are provided only as examples. Various modifications to the examples described herein will be readily apparent to those of ordinary skill in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the various embodiments.
Reference throughout this specification to ‘one embodiment,’ ‘an embodiment,’ ‘one example,’ or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases ‘in one embodiment,’ ‘in an embodiment,’ and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. However, one skilled in the relevant art can recognize that the invention may be practiced without one or more of the specific details or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
Example definitions for some embodiments are now provided.
Application programming interface (API) can be a computing interface that defines interactions between multiple software intermediaries. An API can define the types of calls and/or requests that can be made, how to make them, the data formats that should be used, the conventions to follow, etc. An API can also provide extension mechanisms so that users can extend existing functionality in various ways and to varying degrees.
CIA triad is the confidentiality, integrity, and availability model of information security.
Cloud computing is the on-demand availability of computer system resources, especially data storage (e.g. cloud storage) and computing power, without direct active management by the user.
Cloud database is a database that typically runs on a cloud computing platform and access to the database is provided as-a-service.
Cloud storage is a model of computer data storage in which the digital data is stored in logical pools, said to be on “the cloud”. The physical storage spans multiple servers (e.g. in multiple locations), and the physical environment is typically owned and managed by a hosting company. These cloud storage providers can keep the data available and accessible, and the physical environment secured, protected, and running.
Cloud data warehouse is a cloud-based data warehouse. Cloud data warehouse can be used for storing and managing large amounts of data in a public cloud. Cloud data warehouse can enable quick access and use of an entity's data.
Dark web is the World Wide Web content that exists on darknets: overlay networks that use the Internet but require specific software, configurations, or authorization to access. Through the dark web, private computer networks can communicate and conduct business anonymously without divulging identifying information, such as a user's location.
DBaaS (Database as a Service) can be a cloud computing service that provides access to and use of a cloud database system.
Data lake is a system or repository of data stored in its natural/raw format. A data lake can be object blobs or files. A data lake is usually a single store of data including raw copies of source system data, sensor data, social data etc. A data lake can include various transformed data used for tasks such as reporting, visualization, advanced analytics, and machine learning. A data lake can include structured data from relational databases (rows and columns), semi-structured data (e.g. CSV, logs, XML, JSON), unstructured data (e.g. emails, documents, PDFs) and binary data (e.g. images, audio, video). A data lake can be established “on premises” (e.g. within an organization's data centers) or “in the cloud” (e.g. using cloud services from various vendors).
Malware is any software intentionally designed to disrupt a computer, server, client, or computer network, leak private information, gain unauthorized access to information or systems, deprive access to information, or which unknowingly interferes with the user's computer security and privacy. Researchers tend to classify malware into one or more sub-types (e.g. computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, wiper and keyloggers).
Privilege escalation can be the act of exploiting a bug, a design flaw, or a configuration oversight in an operating system or software application to gain elevated access to resources that are normally protected from an application or user. The result can be that an application with more privileges than intended by the application developer or system administrator can perform unauthorized actions.
Tactics, techniques, and procedures (TTPs) are the “patterns of activities or methods associated with a specific threat actor or group of threat actors.”
In step 104, process 100 protects the data from inside the data store, functioning like human antibodies detecting infection and protecting the human body. Antibodies do a better job of protecting the body from viruses or foreign objects than externally administered drugs, a point brought to the general public's consciousness during the COVID-19 pandemic: people with well-tailored or well-boosted immune systems handled the virus better. Similarly, an intrusion detection and protection system that is embedded into, and well dispersed throughout, the data stores watches every interaction with the data and identifies what is normal for which type of role and user with which type of data. The moment malware or a malicious human tries to access or manipulate the data, the structure of the data, or the policies/governance constructs surrounding the data, the system in real time swings into action with a tailored and measured response to neutralize the damage.
In step 106, process 100 examines the attack and automatically identifies how far the attack has progressed in the attack lifecycle (e.g. the attack kill chain). This is important as it gives a sense of the response time available to the humans protecting the system when operating cyber-attack defenses in a manual or hybrid mode. It can provide information about how long the enterprise has left before the damage from the attackers is permanent.
In step 108, process 100 identifies the target and scope of the attack. This further enables the human operators to prioritize the attack response because process 100 assesses the impact of the attack and the ramifications of the attack. Process 100 auto-classifies the data inside the data stores and establishes the financial value of the data by using pricing signals from the data bounties in the dark web. It establishes the minimum dollar value of the data. Further, by examining the sequence of attack steps/commands being executed, process 100 establishes the target of the attack, the scope, and how the data may be compromised (e.g. the confidentiality (e.g. a data leak), integrity (e.g. data being maliciously manipulated), and/or availability (e.g. ransomware) of the data being compromised).
In step 110, process 100 evaluates how far the attackers have penetrated the system and what their target is, establishes the value of the asset subject to the attackers' attack, and maps the impact of the attack on the CIA triad. This enables security practitioners to look at the attack as it builds out and take the remediation action with a full view.
In step 202, process 200 learns from every data store in which it is deployed. Process 200 learns what users and roles do with what type of data, and it adapts to every environment it is placed in. It can identify failed logins and unusual associated activity by examining the volume of such failed attempts together with factors like the user's location and time of day. Failed logins from both machines and users are tracked. Failed login attempts by processes that mimic a malware's execution scenario are also profiled and graded.
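By way of example and not of limitation, the following Python sketch illustrates per-user profiling of failed logins by location and time of day, as described above. The class name, bucketing scheme, and fixed threshold are illustrative assumptions; the actual system learns per-environment baselines rather than using a constant.

```python
# Minimal sketch of per-user failed-login profiling (hypothetical names/thresholds).
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class FailedLoginProfile:
    """Tracks failed-login counts per (user, location, hour-of-day) bucket."""
    counts: dict = field(default_factory=lambda: defaultdict(int))
    total: int = 0

    def observe(self, user: str, location: str, hour: int) -> None:
        self.counts[(user, location, hour)] += 1
        self.total += 1

    def is_unusual(self, user: str, location: str, hour: int, threshold: int = 5) -> bool:
        # Flag buckets that exceed a simple threshold; a production system
        # would learn per-environment baselines instead of a constant.
        return self.counts[(user, location, hour)] >= threshold

profile = FailedLoginProfile()
for _ in range(6):
    profile.observe("svc_etl", "203.0.113.7", 3)   # 3 a.m. from an unseen network
print(profile.is_unusual("svc_etl", "203.0.113.7", 3))  # True -> candidate signal
```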
In step 204, process 200 delivers a unified data protection system against all forms of data attacks. Process 200 provides a solution that covers the entire spectrum, from malicious or accidental insider attacks (e.g. phished user attacks) and advanced persistent threats to automated supply chain attacks where malware exploits vulnerabilities in trusted code and gains access to trusted systems. The impact of how far the attack has progressed and the financial damage that can be caused is also quantified.
In step 206, process 200 reviews/searches for various typical infiltration signals (i.e. the trail the attacker leaves behind) based on other TTPs and honeypot data. These include, inter alia: notification changes coupled with monitoring disablement or monitoring configuration changes; privilege escalation within the context of a data lake; etc. This can include additional privilege grants that process 200 tracks within the data lake.
If a user assumes a group with admin privileges when that user has historically been a read-only user, process 200 tracks such behavior, as it does any new groups created with higher admin privileges at different times of the day or by a user who also made notification changes. Process 200 tracks the creation or alteration of objects inside the data lake, including security objects, and the alteration or creation of attributes that impact security or access, particularly when such changes have not been part of the user's behavior or role in that environment. Process 200 likewise tracks changes and alterations to security integrations, the creation of new session policies, modifications of the idle timeouts of existing sessions, and anonymous UDF calls or API calls. In each of the above, the combination of the events adds up to a stronger signal of reconnaissance and infiltration.
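By way of illustration only, the following Python sketch shows one way such co-occurring events could be combined into a stronger reconnaissance/infiltration signal. The event names and weights are assumptions for the sketch, not the specific scoring used by process 200.

```python
# Illustrative combination of infiltration events into a single signal score.
# Event names and weights are assumptions, not the patented scoring.
EVENT_WEIGHTS = {
    "notification_change": 0.2,
    "monitoring_disabled": 0.4,
    "new_admin_group": 0.5,
    "security_integration_change": 0.5,
    "session_policy_created": 0.3,
    "anonymous_udf_call": 0.3,
}

def infiltration_score(events: list[str]) -> float:
    """Co-occurring events reinforce each other; the score saturates at 1.0."""
    score = 1.0
    for e in events:
        score *= 1.0 - EVENT_WEIGHTS.get(e, 0.0)  # independent-signal combination
    return 1.0 - score

# A read-only user changing notifications AND creating an admin group is far
# more suspicious than either event alone.
print(infiltration_score(["notification_change"]))                     # ~0.20
print(infiltration_score(["notification_change", "new_admin_group"]))  # ~0.60
```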
In step 208, process 200 can also fingerprint and identify the attackers. Process 200 uses the well-known technique of attacker classification by examining the tactics, techniques, and procedures (TTPs) of the attack sequence. The fingerprinting further helps identify reconnaissance and infiltration signals within the data lake.
In step 210, process 200 automatically calculates an overall grade for the company's preventative security health (e.g. security hygiene). The grade is calculated across all the company's data assets in the cloud and SaaS data stores. It informs the cybersecurity executive team how well the company is doing in keeping its security hygiene posture up. A good posture means fewer escalations and panic events. It also means companies can drive down their cyber insurance premiums. They also know what assets need more protection, focus, etc. Process 200 helps management get a bird's eye view of their investments and any alignment needed. Process 200 gives them an overall grade of how well they are doing versus their peers as well.
In step 302, process 300 automatically analyzes and checks an entity's SaaS data lakes/warehouses for a set of cybersecurity weaknesses that may be exploited in the future by an attacker. Process 300 ranks the weaknesses it finds based on the data at risk. To evaluate the data at risk, process 300 classifies the content of the data (supporting both structured and unstructured data). Process 300's classification uses a set of natural language processing engines that work on the data and identify the set of entity types present in each unit (e.g. cells, columns, rows, files, objects, tables, databases, etc.) of the data. Process 300 puts a dollar value on the data by cross-checking the dollar value bad actors are willing to pay for those entity types in the dark web marketplace. Process 300 allows customers to find the pricing model for the entity types they care about. Process 300 uses a combination of entity criticality, asset dollar value, and ease of exploitability of the issue to automatically prioritize issues for the security teams to address.
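By way of example and not of limitation, the following Python sketch shows one way the criticality/value/exploitability prioritization described above could be combined. The field names, example numbers, and multiplicative ranking are assumptions for illustration, not the specific scoring of process 300.

```python
# Hedged sketch of issue prioritization from entity criticality, asset $ value,
# and ease of exploitability (all field names are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class Issue:
    name: str
    entity_criticality: float  # 0..1, from data classification
    asset_value_usd: float     # dark-web-derived floor price for the data
    exploitability: float      # 0..1, ease of exploiting the weakness

def priority(issue: Issue) -> float:
    # Simple multiplicative ranking; the real system may weight factors differently.
    return issue.entity_criticality * issue.asset_value_usd * issue.exploitability

issues = [
    Issue("public stage with PII", 0.9, 250_000, 0.8),
    Issue("stale test table, weak role", 0.2, 1_000, 0.6),
]
for i in sorted(issues, key=priority, reverse=True):
    print(f"{i.name}: {priority(i):,.0f}")
```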
The key novelty here is that process 300 uses the content of the data, the context of the data, and the context of the identity accessing the data to find security issues, and then uses the data content and how easy the risk is for an attacker to exploit to prioritize what is important and how important it is, so that users can focus their limited resources on the most important things and get the highest value for their investment. All of this is automated, and no security system has built an end-to-end automated workflow that starts from understanding the data of the enterprise and drives prioritized issue resolution, all done in real time, with no data leaving the customer's jurisdiction.
Further, process 300 calculates the overall preventative cybersecurity hygiene (e.g. posture) score in real time and keeps track of the score over time. Executives like CISOs (Chief Information Security Officers) and CIOs (Chief Information Officers) especially like this feature of process 300, as it gives them a good bird's-eye view of their security posture. Additionally, process 300 can then answer the key questions: “How secure is my company's data? What is my company's security grade or report card?”
In step 304, process 300 can calculate a preventative cybersecurity (e.g. posture) grade. The following equation can be utilized by way of example:
x = 1 − [(100*(C_{H,R_H}/C_H + C_{H,R_M}/C_H + C_{H,R_L}/C_H) + 10*(C_{M,R_H}/C_M + C_{M,R_M}/C_M + C_{M,R_L}/C_M) + (C_{L,R_H}/C_L + C_{L,R_M}/C_L + C_{L,R_L}/C_L))/111]
C_X is either the cardinality of the entities associated with category X (High|Medium|Low) or the sum of the financial $ value of the entities in that High|Medium|Low category; C_{X,R_Y} is the corresponding quantity for category-X entities having issues of severity Y (High|Medium|Low).
In one example, the default option can be cardinality.
The user can toggle a button on the user interface to get either the cardinality or the asset value.
The default grading formula based on Cardinality is:
X = 1 − [
(Cardinality of High Entities with Severity 1 issues/Cardinality of all High Entities + Cardinality of High Entities with Severity 2 issues/Cardinality of all High Entities + Cardinality of High Entities with Severity 3 issues/Cardinality of all High Entities + Cardinality of High Entities with Severity 4 issues/Cardinality of all High Entities)*100 +
(Cardinality of Medium Entities with Severity 1 issues/Cardinality of all Medium Entities + Cardinality of Medium Entities with Severity 2 issues/Cardinality of all Medium Entities + Cardinality of Medium Entities with Severity 3 issues/Cardinality of all Medium Entities + Cardinality of Medium Entities with Severity 4 issues/Cardinality of all Medium Entities)*10 +
(Cardinality of Low Entities with Severity 1 issues/Cardinality of all Low Entities + Cardinality of Low Entities with Severity 2 issues/Cardinality of all Low Entities + Cardinality of Low Entities with Severity 3 issues/Cardinality of all Low Entities + Cardinality of Low Entities with Severity 4 issues/Cardinality of all Low Entities)*1
]/111
The default grading formula based on the $ value is similar to the cardinality formula, replacing cardinality with the $ value of the entities.
Grade Assignment:
Grade=A+ if 0.97<=X<=1
Grade=A if 0.93<=X<=0.96
Grade=A− if 0.9<=X<=0.92
Grade=B+ if 0.87<=X<=0.89
Grade=B if 0.83<=X<=0.86
Grade=B− if 0.8<=X<=0.82
Grade=C+ if 0.77<=X<=0.79
Grade=C if 0.73<=X<=0.76
Grade=C− if 0.70<=X<=0.72
Grade=D+ if 0.67<=X<=0.69
Grade=D if 0.65<=X<=0.66
Grade=D− if X<0.65
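By way of example and not of limitation, the following Python sketch computes the cardinality-based grade X per the formula above and maps it to a letter grade per the assignment table. The input layout (per-category severity counts) is an assumption for illustration.

```python
# Minimal sketch of the cardinality-based posture grade described above.
# severity_counts[cat][s] = number of cat-criticality entities with severity-s issues.
def posture_grade(severity_counts: dict, totals: dict) -> float:
    weights = {"high": 100, "medium": 10, "low": 1}
    acc = 0.0
    for cat, w in weights.items():
        n = max(totals.get(cat, 0), 1)  # avoid division by zero for empty categories
        acc += w * sum(severity_counts.get(cat, {}).get(s, 0) / n for s in (1, 2, 3, 4))
    return 1.0 - acc / 111.0

def letter(x: float) -> str:
    # Lower bounds follow the grade-assignment table above.
    table = [(0.97, "A+"), (0.93, "A"), (0.90, "A-"), (0.87, "B+"), (0.83, "B"),
             (0.80, "B-"), (0.77, "C+"), (0.73, "C"), (0.70, "C-"),
             (0.67, "D+"), (0.65, "D")]
    for lo, g in table:
        if x >= lo:
            return g
    return "D-"

x = posture_grade({"high": {1: 2}, "medium": {2: 5}},
                  {"high": 100, "medium": 200, "low": 50})
print(x, letter(x))  # ~0.98 -> A+
```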
These equations are provided by way of example and not of limitation. Examples of some of the built-in preventative cybersecurity insights that process 300 (and/or the other systems and methods provided herein) delivers for its customers, using a combination of machine learning and security analysis on the data and access identity for SaaS data stores, are discussed herein; these are by no means exhaustive. Besides the built-in insight engines, process 300 (and/or the other systems and methods provided herein) enables the end customers' governance and data assurance teams to define their own custom insight engines.
In step 310, process 300 implements data lake and warehouse intrusion detection.
Using all this information, process 300 automatically computes how over-provisioned an access role or access user is for the most granular unit of data. In the case of data lakes and data warehouses, the most granular unit of data is a data store table or a column inside a table inside a database, as shown in the figure above. The percentage shows how over-provisioned access to the data is; in other words, how many users or machines that have no business accessing the data can nonetheless access it. An over-provisioned percentage of 0% is ideal, meaning only the people or machines that need access at any point in time have access, and no one else. This is the best preventative security posture with which to run an organization, but it is impossible to achieve with human-driven systems or the current state of the art in the industry. Process 300 delivers this automatically for the entity undergoing analysis (e.g. a customer, etc.).
To get to the over-provisioned percentage, process 300 looks at two identity access constructs: the role and the user/machine identity. Process 300 evaluates the optimal mapping of every configured role to every table/column inside the database/data lake/warehouse, analyzing all the privileges granted to the role for the particular table and column. Next, process 300 studies the user-to-role or user-to-attribute relationships, determines which roles and attributes are superfluous, and automates the pruning of those extra grants to give the customer the best possible security access posture for their data at any point in time.
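By way of example and not of limitation, a minimal Python sketch of the over-provisioned percentage for a single table follows, assuming the granted identities and the identities observed actually accessing the table have already been extracted; the names are illustrative.

```python
# Illustrative over-provisioned percentage for one table (names are assumptions).
def overprovisioned_pct(granted_identities: set[str], active_identities: set[str]) -> float:
    """Share of identities granted access that never actually needed it."""
    if not granted_identities:
        return 0.0
    unused = granted_identities - active_identities
    return 100.0 * len(unused) / len(granted_identities)

granted = {"analyst_a", "analyst_b", "etl_bot", "intern_x", "legacy_app"}
active  = {"analyst_a", "etl_bot"}               # observed accessing the table
print(overprovisioned_pct(granted, active))      # 60.0 -> candidates for pruning
```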
To summarize, what the user or enterprise customer gets from process 300 is a system that automatically delivers the principle of least privilege (e.g. a need-to-know basis) without breaking any application or workflow. This automatically inserts “bulkhead” walls between the different compartments of data inside a data store like Snowflake, just as submarines or ships have bulkheads to prevent flooding in one section from filling up other areas of the vessel and sinking it. Process 300 does this automatically inside a data store like Snowflake, limiting the damage an attacker can do inside the database even if he or she were to break in. Additionally, process 300 finds the most granular and optimal “bulkhead wall” placement and keeps updating the placements over time, something that is not done with the physical bulkheads of real-world submarines or ships. Process 300 uses artificial intelligence and data analysis to deliver the above outcomes.
Machine learning (ML) is a type of artificial intelligence (AI) that uses statistical techniques to give computers the ability to learn and progressively improve performance on a specific task with data, without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. Example machine learning techniques that can be used herein include, inter alia: decision tree learning, association rule learning, artificial neural networks, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, and/or sparse dictionary learning. Random forests (RF) (e.g. random decision forests) are an ensemble learning method for classification, regression, and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (e.g. classification) or the mean prediction (e.g. regression) of the individual trees. RFs can correct for decision trees' habit of overfitting to their training set. Deep learning is a family of machine learning methods based on learning data representations. Learning can be supervised, semi-supervised, or unsupervised.
Process 300 identifies both human- and machine-based probing attempts (and/or attacks). Process 300 classifies their geo-location, IP address, ASN, etc., and identifies where the probes are coming from. This data can be compared with normal behavior.
Step 324 can provide an early-warning heads-up to the defenders if any of the system parameters need to be tightened. This can be akin to a fighter jet detecting that it is being painted by radar, telling the pilot to take defensive and evasive maneuvers before the targeting and fire control systems can establish a lock. Similarly, process 300 uses the probes to determine whether there is undue or new interest in the data store, and from where.
Process 300 also studies the types of probe failures to identify which user's account is being used and what types of probe failures are being picked up. This provides an early indicator of the set of TTPs the attacker may use. All of this feeds into the attack lifecycle analyzer and grading system.
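By way of illustration only, the following Python sketch compares a probe's source attributes against learned baselines for the data store; the attribute names and baseline layout are assumptions for the sketch.

```python
# Sketch of probe-source comparison against learned baselines (illustrative fields).
def probe_deviation(probe: dict, baseline: dict) -> list[str]:
    """Return which attributes of a probe deviate from the store's normal traffic."""
    anomalies = []
    for attr in ("geo", "ip_prefix", "asn"):
        if probe.get(attr) not in baseline.get(attr, set()):
            anomalies.append(attr)
    return anomalies

baseline = {"geo": {"US", "DE"}, "ip_prefix": {"10.1", "10.2"}, "asn": {64500}}
probe = {"geo": "KP", "ip_prefix": "198.51", "asn": 64999}
print(probe_deviation(probe, baseline))  # ['geo', 'ip_prefix', 'asn'] -> early warning
```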
In step 328, the attack starts with making the entry, that is, execution of the beachhead part of the attack. In this step, the attacker successfully gains a toehold inside the data store. After execution, the attackers or the malware immediately try to establish persistence. Though persistence is not always required, this step is considered essential by most attackers or malware so that they can get back into the system if an unforeseen incident happens, such as the Snowflake service being restarted/relocated or the network connection getting dropped.
In some examples, the account with which the attacker or malware enters the entity's computer system(s), often may not be a highly privileged account and/or the entry account may not have the right privileges to get to the data the attacker has an interest in (e.g. assuming the attacker knows what they are after from the very beginning, sometimes this step can occur after Data Intelligence collection). In cases like this, the attacker and/or malware has to execute a privilege elevation attack through one or more intermediate accounts or grant itself the right privileges till it gets sufficient privileges to get to the data of interest. This step is called privilege escalation.
Example Grade Calculations
The following grade calculation information can be utilized herein (e.g. to generate information associated with screenshot 1500, etc.).
Normalizer=1/Stage Event Count Historical Max
/*Comment: saturate it or put smart seed defaults for each stage. e.g. login attempts fails start with 100 a day*/
Stage numeric grade=1−[stage event count*normalizer]
The amplification weight for each stage is defined as the inverse inclusion probability and is used to scale each stage's grade. See the figure above for the default amplification weights for each stage. The user can change the weights if needed.
Total numeric GPA=Σ(Stage Weight Credit*Stage Grade numeric score)/(Total Stage Weight Credits).
Conversion to the letter GPA uses a table mapping each letter grade to its percentage (or decimal) grade numeric score.
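By way of example and not of limitation, the following Python sketch is one plausible reading of the stage-grade and GPA formulas above: per-stage grades are normalized by the historical maximum event count, then combined as a weighted average using the amplification weights. The stage inputs and weights shown are assumptions.

```python
# One plausible reading of the stage-grade formulas above (inputs are assumptions).
def stage_grade(event_count: int, historical_max: int) -> float:
    normalizer = 1.0 / max(historical_max, 1)        # seed defaults avoid divide-by-zero
    return max(0.0, 1.0 - event_count * normalizer)  # saturate at 0

def total_gpa(stages: list[tuple[int, int, float]]) -> float:
    """stages: (event_count, historical_max, amplification_weight) per stage."""
    graded = [(w, stage_grade(c, m)) for c, m, w in stages]
    return sum(w * g for w, g in graded) / sum(w for w, _ in graded)

# Reconnaissance busier than usual, infiltration quiet:
print(total_gpa([(80, 100, 4.0), (1, 20, 2.0)]))  # 0.45, weighted toward early stages
```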
Prevalence Hash Functionalities
Screen shot 1600 illustrates an example hash computation implemented with a prevalence hash. Screen shots 1700-1800 illustrate example screen shots showing global prevalence for all query types over an example last month, with the last showing global prevalence for select query types over the same example last month. As shown, depending on the attack stage detected, the appropriate features are used as discussed infra (e.g. with respect to the attack sections). For example, functions such as direct_tables_access and base_tables_access can be utilized.
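By way of illustration only, the following Python sketch shows one way a prevalence hash could work: fingerprint the shape of each query and count how often that fingerprint occurs globally. The exact hash inputs used by the system are not specified here; this sketch assumes a query-type-plus-base-tables fingerprint.

```python
# Hedged sketch of a prevalence hash: fingerprint each query's shape and count
# how often that shape occurs globally (inputs and fields are assumptions).
import hashlib
from collections import Counter

def prevalence_hash(query_type: str, base_tables: frozenset[str]) -> str:
    """Stable fingerprint of a query's type and the base tables it touches."""
    key = query_type + "|" + ",".join(sorted(base_tables))
    return hashlib.sha256(key.encode()).hexdigest()[:16]

prevalence = Counter()
for q in [("SELECT", frozenset({"orders"})), ("SELECT", frozenset({"orders"})),
          ("SELECT", frozenset({"users", "cards"}))]:
    prevalence[prevalence_hash(*q)] += 1
# Rare fingerprints (count == 1) touching sensitive tables stand out as signals.
print(prevalence.most_common())
```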
Dashboard Functionalities
Attack Phase: Reconnaissance Attack Detection
Attack Phase 2: Infiltration
Execution and Persistence Phase
The following list of features can be abused by attackers to infiltrate, and these can be utilized to detect infiltration (an example detection sketch follows the list):
1. Create/Alter Account Objects: API integration; connection; database; database/clone; network policy; notification integration; resource monitor; role; security integration; share; storage integration; user; warehouse.
2. Call/UDF: new procedure; anonymous procedure.
3. Create/Alter Database Objects: external function; external table; file format; file format/clone; function; masking policy; materialized view; password policy; pipe; procedure; row access policy; schema; schema/clone; sequence; sequence/clone; session policy; stage; stage/clone; stream; stream/clone; table; table/clone; tag; task; task/clone; view.
4. Create/Alter Security Integration: external OAUTH; OAUTH; SAML2; SCIM.
5. Execute Immediate: SQL; procedure call; control-flow; block.
6. Task: create; execute; alter.
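By way of example and not of limitation, the following Python sketch matches audit-log statements against a few of the abuse-prone features listed above. The regular expressions and log format are illustrative assumptions, not the system's actual detection rules.

```python
# Sketch matching audit-log statements against abuse-prone features listed above
# (the regexes and log format are illustrative assumptions).
import re

INFILTRATION_PATTERNS = [
    re.compile(r"^\s*(CREATE|ALTER)\s+(API\s+INTEGRATION|NETWORK\s+POLICY|"
               r"SECURITY\s+INTEGRATION|SESSION\s+POLICY|USER|ROLE)\b", re.I),
    re.compile(r"^\s*EXECUTE\s+IMMEDIATE\b", re.I),
    re.compile(r"^\s*CALL\b", re.I),
]

def flag_statements(audit_log: list[str]) -> list[str]:
    return [s for s in audit_log if any(p.search(s) for p in INFILTRATION_PATTERNS)]

log = ["SELECT * FROM orders",
       "ALTER SECURITY INTEGRATION okta_sso SET ENABLED = FALSE",
       "CREATE NETWORK POLICY allow_all ALLOWED_IP_LIST=('0.0.0.0/0')"]
print(flag_statements(log))  # the two CREATE/ALTER statements are flagged
```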
Privilege Escalation
A privilege escalation attack can be a cyberattack designed to gain unauthorized privileged access into a system. Cyber attackers can attempt to exploit various human behaviors, gaps in operating systems or applications and/or system design flaws. Privilege Escalation operations can involve, inter alia: granting various state (e.g. ownership, roles, privileges to role, privileges to share, etc.); creation of session policies (e.g. idle timeout, idle timeout UI, etc.).
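By way of illustration only, the following Python sketch detects an escalation path formed by chained role grants. The graph construction and role names are assumptions for the sketch.

```python
# Illustrative detection of privilege escalation via chained role grants
# (graph layout and role names are assumptions for the sketch).
def reachable_roles(grants: dict[str, set[str]], start: str) -> set[str]:
    """All roles a principal can reach by following GRANT edges."""
    seen, stack = set(), [start]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(grants.get(r, ()))
    return seen

grants = {"readonly_user": {"reporting"}, "reporting": {"data_eng"},
          "data_eng": {"ACCOUNTADMIN"}}
reach = reachable_roles(grants, "readonly_user")
if "ACCOUNTADMIN" in reach:
    print("escalation path: read-only user can reach ACCOUNTADMIN")
```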
Other Screenshots
Additional Computing Systems
Additional Machine Learning Methods
In step 3706, process 3700 normalizes the features to detect attacks in any type of data lake or data warehouse. In step 3708, process 3700 trains and baselines the behavior of each database and table individually in every customer's environment. This ensures that the models are personalized and tailored to each customer's environment; further, the models are specific to the particular databases of the customer. For example, a test database of a customer may have a very different access baseline compared to a CRM production database of the same customer. In step 3710, process 3700 learns a baseline per access (e.g. role and user) per data unit (e.g. database and table), which produces high-fidelity attack detection.
The training period can include a lookback period (e.g. a minimum of ninety (90) days, six (6) months, etc.). Training versus predicting of the model is now discussed. The first two-thirds of the lookback period can be used for training; the final third of the lookback period can be used for predicting. When a new data store or a new database is onboarded into the present invention, process 3700 may not have a baseline for the new data store, even though the customer may have been using process 3700 for other databases inside, say, Snowflake. Accordingly, any predictions and detections process 3700 makes may not have the fidelity customers are used to. To address this scenario, process 3700 transparently learns a new baseline for every database when it is onboarded, always triggering training for that database. Process 3700, in one example, uses the last 90 days or longer of access and operational data for the data store.
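By way of example and not of limitation, the following Python sketch shows the lookback split described above: the first two-thirds of history trains a per-table baseline and the final third is scored against it. The event layout and the "novel access" test are assumptions for illustration.

```python
# Sketch of the lookback split: train on the first two-thirds, predict on the rest
# (data layout and novelty test are assumptions).
from collections import Counter

def split_lookback(events: list[dict]) -> tuple[list[dict], list[dict]]:
    cut = (2 * len(events)) // 3
    return events[:cut], events[cut:]

def baseline_counts(train: list[dict]) -> Counter:
    return Counter((e["role"], e["table"]) for e in train)

events = [{"role": "analyst", "table": "orders"}] * 90 \
       + [{"role": "intern", "table": "cards"}] * 3      # novel access late in window
train, predict = split_lookback(events)
seen = baseline_counts(train)
novel = [e for e in predict if (e["role"], e["table"]) not in seen]
print(len(novel), "accesses with no baseline -> review")  # 3
```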
Learning feedback can utilize reinforcement learning methods. A false-positive indication from the UI drives whitelisting (e.g. data relabeling). Feedback can be given per event (e.g. a row per tile in the UI).
Although the present embodiments have been described with reference to specific example embodiments, various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the various embodiments. For example, the various devices, modules, etc. described herein can be enabled and operated using hardware circuitry, firmware, software or any combination of hardware, firmware, and software (e.g., embodied in a machine-readable medium).
In addition, it can be appreciated that the various operations, processes, and methods disclosed herein can be embodied in a machine-readable medium and/or a machine-accessible medium compatible with a data processing system (e.g., a computer system), and can be performed in any order (e.g., including using means for achieving the various operations). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. In some embodiments, the machine-readable medium can be a non-transitory form of machine-readable medium.
This application claims priority to U.S. Provisional Application No. 63/439,579, filed on 18 Jan. 2023 and titled DATA STORE ANALYSIS METHODS AND SYSTEMS. This provisional application is hereby incorporated by reference in its entirety. This application claims priority to U.S. patent application Ser. No. 17/335,932, filed on Jun. 1, 2021 and titled METHODS AND SYSTEMS FOR PREVENTION OF VENDOR DATA ABUSE. U.S. patent application Ser. No. 17/335,932 is hereby incorporated by reference in its entirety. U.S. patent application Ser. No. 17/335,932 claims priority to U.S. Provisional Patent Application No. 63/153,362, filed on 24 Feb. 2021 and titled DATA PRIVACY AND ZERO TRUST SECURITY CENTERED AROUND DATA AND ACCESS, ALONG WITH AUTOMATED POLICY GENERATION AND RISK ASSESSMENTS. U.S. Provisional Patent Application No. 63/153,362 is hereby incorporated by reference in its entirety.
Provisional applications to which priority is claimed:

Number | Date | Country
63439579 | Jan 2023 | US
63153362 | Feb 2021 | US

Parent case data:

Relationship | Number | Date | Country
Parent | 17335932 | Jun 2021 | US
Child | 18214527 | | US