The volume and variety of personal data being recorded and stored by organizations places personal privacy at risk. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, General Data Protection Regulation (GDPR) Recital 26, and European Medicines Agency (EMA) Policy 0070 have been adopted in an effort to protect the privacy of personal data. Each of these regulations requires that, to avoid restrictions upon sharing and reuse, personal data be anonymized or pseudo-anonymized, such that the data cannot reasonably be associated with an individual. Satisfying anonymization requirements is challenging, particularly considering how identity can be inferred from seemingly innocuous attributes such as birth year, postal code, gender, ethnicity, occupation, etc. These attributes, known as indirect identifiers or quasi-identifiers (QIDs), effectively allow a third party to determine a person's identity by process of elimination. To assess how likely it is that an individual can be re-identified, previous techniques introduced the concept of k-anonymity, which characterizes the degree of protection of the data against such attacks. A dataset is said to have the property of k-anonymity (for a given value of k) where k is defined as the cardinality of the smallest group (or cohort) when the dataset is partitioned into distinct groups of records over all QID values. In some examples, datasets which exhibit k-anonymity can be referred to as k-anonymous. Intuitively, the larger the value of k, the more difficult it is to infer the person associated with a given record, as there are at least k−1 other individuals in the same dataset with matching QIDs.
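To make the definition concrete, the following sketch (with hypothetical records and QID names, not drawn from any figure herein) computes k by partitioning records on their QID values and taking the size of the smallest cohort:

```python
from collections import Counter

def k_anonymity(records, qids):
    """Return k: the size of the smallest cohort when the records are
    partitioned into groups sharing identical quasi-identifier values."""
    cohort_sizes = Counter(tuple(r[q] for q in qids) for r in records)
    return min(cohort_sizes.values())

records = [
    {"birth_year": 1980, "postal": "53703", "gender": "F"},
    {"birth_year": 1980, "postal": "53703", "gender": "F"},
    {"birth_year": 1975, "postal": "53703", "gender": "M"},
]
# The (1975, "53703", "M") cohort has a single member, so k = 1.
print(k_anonymity(records, ["birth_year", "postal", "gender"]))  # -> 1
```

Note that dropping QIDs from consideration can only merge cohorts, never split them, so k never decreases when fewer attributes are disclosed.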
Anonymization techniques can be designed to increase the k of a dataset by grouping similar cohorts together to form larger sets of indistinguishable records. This is done by either generalizing or suppressing the values of certain attributes. For example, a generalization technique could replace cities with less specific administrative regions such as counties, states, or countries. A suppression technique would remove values entirely, for example removing low-population zip codes. By modifying the disclosed attributes using generalization and/or suppression, the k-anonymity of a dataset can be increased.
Existing conventional relational databases typically cannot maintain k-anonymity for responses to queries when the contents of the relational database changes. For many database applications, new records may be continually added and/or deleted from the relational database, which can cause the data anonymity of answers to queries to fall below a desired minimum level; for example, the k value can fall below the desired minimum when a record(s) is removed from or added to the database. Accordingly, what is needed is a system embodying a computer-implemented process that can be utilized to dynamically apply a specified level of k-anonymity to the results or answers to queries sent to a relational database under changing conditions, thus maintaining a desired level of anonymity or privacy regardless of database changes.
Embodiments consistent with the present invention include systems, processes, and computer program products configured to apply k-anonymity to an answer to a query sent to a relational database. A query to the relational database is obtained, the relational database containing a plurality of records. A frequency of occurrence of the attributes in the relational database is determined; an anonymization rule set is created based on the frequency of occurrence of the attributes, the anonymization rule set defining which attributes are to be suppressed in the answer to the query; the anonymization rule set is used to generate the answer to the query, wherein the answer to the query has k-anonymity; and a display or other device is controlled based on the answer to the query.
In some embodiments described herein, the anonymization rule set is used by the system to generate the answer to the query having k-anonymity.
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate some implementations of the invention and, together with the description, serve to explain the principles of the invention.
In practice, achieving anonymization or pseudo-anonymization requires making it difficult to associate individual records with high certainty to individual people. Identity disclosure is when an outside party can confidently identify a subject or respondent from a dataset. A first line of defense involves removing all directly identifying attributes such as names, social security numbers, and account numbers from released data. For many practical applications, simply removing or masking direct identifiers in data is not sufficient to protect against identity disclosure, due to the presence of quasi-identifiers.
As previously noted, some existing conventional relational databases implement k-anonymity for responses to queries. As also noted, however, such conventional relational databases typically cannot always maintain a needed level of k-anonymity because the contents of the relational database changes over time. For example, the database may continuously add and/or delete new data, and the new data may have new or additional attributes that were not present in previous data held in the database; for example, a person may be added who resides in a U.S. state different from the states of residence of all the other persons in the database. In such a situation, disclosure of the state of residence of that person may result in the k-anonymization of the answers to queries falling below the k level needed to maintain anonymity.
The embodiments of systems and methods described herein k-anonymize outgoing data using the changed contents of the database, determining which attributes to suppress to achieve the needed k-anonymity in the results or answers to queries, a determination that is otherwise difficult to make.
The property of k-anonymity characterizes the degree of protection of a dataset against linking on quasi-identifiers (QIDs). In practice, k-anonymity is measured by first identifying the set of QIDs. Individuals are then grouped into cohorts or groups, where each cohort comprises records with the same values for the set of QIDs. For example, the set of all unmarried female patients, aged 40-50, within the state of Wisconsin could define a cohort. The value of k in a k-anonymous release (i.e., the release of data from the database in response to a query or the like) is then defined as the number of individuals contained in the smallest cohort. In practice, the re-identification probability of any single record is at most 1/k, with larger values of k resulting in a reduced re-identification probability, which corresponds to more privacy.
The value of k can be increased by suppressing or generalizing released attributes. For example, gender may be suppressed (e.g., by deleting it) for a small cohort, effectively doubling the cohort size; alternatively, a set of small, nearby postal codes can be grouped together (generalized), for example by replacing those postal codes with the name of the metropolitan area, thereby increasing cohort size. In practice, an organization may define some minimum value of k (such as 11), and the embodiments described herein may suppress and/or generalize attributes until that minimum value of k is achieved across all cohorts.
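Continuing with hypothetical records, the effect of suppression on k can be demonstrated directly: suppressing gender below merges two cohorts of size two into one cohort of size four, doubling the cohort size as described above.

```python
from collections import Counter

def k_after_suppression(records, qids, suppressed):
    """Compute k over only the QIDs that remain disclosed after suppression."""
    kept = [q for q in qids if q not in suppressed]
    sizes = Counter(tuple(r[q] for q in kept) for r in records)
    return min(sizes.values())

records = [
    {"age_band": "40-50", "state": "WI", "gender": "F"},
    {"age_band": "40-50", "state": "WI", "gender": "F"},
    {"age_band": "40-50", "state": "WI", "gender": "M"},
    {"age_band": "40-50", "state": "WI", "gender": "M"},
]
qids = ["age_band", "state", "gender"]
print(k_after_suppression(records, qids, set()))       # -> 2
print(k_after_suppression(records, qids, {"gender"}))  # -> 4 (cohorts merge)
```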
There are numerous methods to produce a k-anonymous release. Broadly, these methods can be categorized into two classes: global and local methods. Global methods define a rule (or rules) to generalize or suppress an attribute (or attributes) that applies uniformly across all records. Suppressing gender and releasing only the first three digits of a US postal code for all records is an example of a global method.
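A global rule of this kind can be sketched as a transformation applied identically to every record (the attribute names below are illustrative, not taken from any figure):

```python
def apply_global_rules(record):
    """Global rules apply uniformly to every record: suppress gender
    entirely, and generalize the postal code to its first three digits."""
    out = dict(record)
    out["gender"] = None                    # suppression
    out["zip"] = record["zip"][:3] + "**"   # generalization
    return out

print(apply_global_rules({"zip": "53703", "gender": "F", "age": 44}))
# -> {'zip': '537**', 'gender': None, 'age': 44}
```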
Local methods suppress or generalize attributes in a manner that depends on the contents of each record. For example, it may be permissible to disclose the occupation attribute for lawyers in Washington D.C., but not in a small locality such as Hamlet, Ohio, because the small number of lawyers in Hamlet, Ohio does not allow for enough k-anonymity. Global methods are generally simple to implement in a system but over-coarsen the data release, which makes the data release less useful. Local methods provide more information content, but can be difficult to implement in a system, and the privacy protections are fragile to the introduction of new data (which terminology, as used herein, includes the removal of existing data). In all cases, there are numerous schemes that can be employed to achieve a k-anonymous release. Various embodiments described herein define a series of rules, preserve a useful amount of information content in the data, adjust the rules to handle additional data being added to (or deleted from) the database, and/or can be applied across numerous relational database technologies.
In some embodiments described herein, a system, a method, or a computer program product can utilize a Decision Tree (DT) data structure. A DT is a composite structure, built using a series of binary decisions. When applied to a dataset, the DT partitions the data into a series of non-overlapping groups (called partition elements), where each record in a dataset is mapped to one and only one partition element.
In some embodiments, the k-anonymity condition is achieved from the DT through the application of three operations. One example operation can include terminating the DT when a partition cannot be found that divides the dataset into two partition elements, each having at least k records. Another example operation can include extracting the conditions for each partition element. Another example operation can include releasing only the values of attributes which are homogeneous (i.e., single-valued) for the records within a given partition element.
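The three operations above can be sketched as follows. This is a simplified, hypothetical greedy implementation for illustration only; the split-selection criterion used by the actual embodiments (e.g., information gain) may differ.

```python
def build_partition(records, qids, k):
    """Recursively split on (attribute, value) equality tests; terminate a
    branch when no split leaves at least k records on both sides.  Each
    terminal list of records is one partition element."""
    for q in qids:
        for v in {r[q] for r in records}:
            pos = [r for r in records if r[q] == v]
            neg = [r for r in records if r[q] != v]
            if len(pos) >= k and len(neg) >= k:
                return build_partition(pos, qids, k) + build_partition(neg, qids, k)
    return [records]  # terminal partition element

def releasable(element, qids):
    """Only homogeneous (single-valued) attributes within an element are released."""
    return [q for q in qids if len({r[q] for r in element}) == 1]

records = [{"state": "OH"}] * 3 + [{"state": "WI"}] * 3
parts = build_partition(records, ["state"], k=2)
print(len(parts))  # -> 2 (one all-OH element, one all-WI element)
```

With k raised to 4, no valid split exists, so the whole dataset stays in one element and the non-homogeneous state attribute is suppressed for every record.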
In one usage example, a user may use the client device 110 to send a query 112 (e.g., a request for data from a database) to the computing system 116, which provides a result 114. Computing system 116, in accordance with aspects of the present disclosure, may be configured to receive the query and to communicate with the relational database 120. The computing system may be further configured to provide a result 114 to the client device 110, where the result is k-anonymous in accordance with embodiments disclosed herein. The computing system 116 may be configured to receive settings (e.g., from a system administrator or another computer), such as a value of k and other process control parameters (see
The computing device 210 may include a bus 214, a processor 216, a main memory 218, a read only memory (ROM) 220, a storage device 224, an input device 228, an output device 232, and a communication interface 234. Bus 214 may include a path that permits communication among the components of device 210. Processor 216 may be or include a processor, a microprocessor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or another type of processor that interprets and executes instructions. Main memory 218 may include a random access memory (RAM) or another type of dynamic storage device that stores information or instructions for execution by processor 216. ROM 220 may include a ROM device or another type of static storage device that stores static information or instructions for use by processor 216. Storage device 224 may include a magnetic storage medium, such as a hard disk drive, or a removable memory, such as a flash memory.
Input device 228 may include a component that permits an operator to input information to device 210, such as a control button, a keyboard, a keypad, or another type of input device. Output device 232 may include a component that outputs information to the operator, such as a light emitting diode (LED), a display, or another type of output device. Communication interface 234 may include any transceiver-like component that enables device 210 to communicate with other devices or networks. In some implementations, communication interface 234 may include a wireless interface, a wired interface, or a combination of a wireless interface and a wired interface. In embodiments, communication interface 234 may receive computer readable program instructions from a network and may forward the computer readable program instructions for storage in a computer readable storage medium (e.g., storage device 224).
System 200 may perform certain operations, as described in detail below. System 200 may perform these operations in response to processor 216 executing software instructions contained in a computer-readable medium, such as main memory 218. A computer-readable medium may be defined as a non-transitory memory device and is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. A memory device may include memory space within a single physical storage device or memory space spread across multiple physical storage devices.
The software instructions may be read into main memory 218 from another computer-readable medium, such as storage device 224, or from another device via communication interface 234. The software instructions contained in main memory 218 may direct processor 216 to perform processes that will be described in greater detail herein. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
In some implementations, system 200 may include additional components, fewer components, different components, or differently arranged components than are shown in
The system may be connected to a communications network (not shown), which may include one or more wired and/or wireless networks. For example, the network may include a cellular network (e.g., a second generation (2G) network, a third generation (3G) network, a fourth generation (4G) network, a fifth generation (5G) network, a long-term evolution (LTE) network, a global system for mobile (GSM) network, a code division multiple access (CDMA) network, an evolution-data optimized (EVDO) network, or the like), a public land mobile network (PLMN), and/or another network. Additionally, or alternatively, the network may include a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), the Public Switched Telephone Network (PSTN), an ad hoc network, a managed Internet Protocol (IP) network, a virtual private network (VPN), an intranet, the Internet, a fiber optic-based network, and/or a combination of these or other types of networks. In embodiments, the communications network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
The computing device 210 shown in
One of ordinary skill will recognize that the components, arrangement, and implementation details of the computing system 116, 210 are examples presented for conciseness and clarity of explanation. Other components, implementation details, and variations may be used, including adding, combining, or subtracting components and functions. Additionally, the functionality carried out by the computing system 116, 210 could be performed by the relational database 120.
In 264, a frequency of occurrence of the attributes of the records in the relational database is determined, for example, by the computing system 116. In various embodiments, this may be accomplished by counting the number of times each unique attribute occurs in the database 120. For example, for an attribute indicating a U.S. state where a person or entity resides or is located, the system 116 may count or otherwise determine the number of times each state is listed in the database.
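For instance (with hypothetical values), such a frequency count can be obtained with a simple counter over the attribute's column:

```python
from collections import Counter

# Hypothetical state-of-residence column: count how often each value occurs.
states = ["WI", "OH", "WI", "OH", "OH", "AK"]
frequency = Counter(states)
print(frequency["OH"])  # -> 3
print(frequency["AK"])  # -> 1 (a rare value, a candidate for suppression)
```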
In 266, an anonymization rule set is created based on the frequency of occurrence of the attributes. The anonymization rule set defines which attribute or attributes are to be suppressed, anonymized, or obscured in the answer to the query (e.g., in the results 114). In various embodiments, when the anonymization rule set is applied to the results of the query 112, the rules change the query result 114 such that the result 114 exhibits k-anonymity. Details of how the anonymization rule set is created are further explained herein below.
In 268, the anonymization rule set is provided to be used to generate the answer to or the result of the query, wherein the answer or result to the query has k-anonymity. In some embodiments, operations 264 and 266 may be performed in response to obtaining the query at operation 262, and the computing system 116 may apply the rule set to the query results to generate the k-anonymized result 114 that is provided to the client device 110. In other embodiments, the rule set created in operation 266 may be stored by the computing system 116, and the stored rule set can be used to generate the k-anonymized result 114 each time a query 112 is obtained. In such embodiments, operations 264 and 266 may not be performed each time a query is obtained at operation 262; instead, operations 264 and 266 may be performed only when there is a change to the database 120 and/or periodically, such as once every 10 minutes, once an hour, twice a day, once a day, once a week, once a month, or the like. Additionally, the system may determine when operations 264 and 266 need to be performed.
In 270, the embodiments disclosed herein generate the k-anonymous answer to the query using the generated rule set as explained herein. The answer can vary depending on the particular dataset. Additionally, the rule set can be updated or changed when data is added or deleted from the database, which can result in a change in the answer (such that the updated or changed rule set provides a different answer).
In 272, the answer can be used to control a display or other device to show or use the answer to the query. For example, the answer can be displayed on a display or other device, or the answer can be configured to control a display or other device. In some embodiments, the answer or result may be configured to be displayed in a particular format, or to be displayed with information in addition to the answer.
One of ordinary skill will recognize that the example of a process 260 shown in
In accordance with embodiments disclosed herein, the data in dataset 300 of
For example, in
Referring to
D12 is defined, in the example shown, by the condition STATE=OH AND CITY<>HAMLET, meaning that only the values for the STATE attribute can be released, with the attributes for CITY and OCCUPATION suppressed. D21 is defined, in the example shown, by the condition STATE<>OH AND OCCUPATION=ACCOUNTANT, meaning that only the values for the OCCUPATION attribute can be disclosed, with the attributes for CITY and STATE being suppressed. D22 is defined, in the example shown, by the condition STATE<>OH AND OCCUPATION<>ACCOUNTANT, meaning that no attribute values can be released, with the attributes for STATE, CITY, and OCCUPATION all being suppressed.
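Cohort conditions of this kind can be represented programmatically as predicate/disclosure pairs. The sketch below is a hypothetical encoding using the D12 and D21 conditions from the example; records that match no rule fall through to full suppression.

```python
# Each cohort rule pairs a predicate over the QIDs with the set of
# attributes that may be disclosed for matching records.
RULES = [
    (lambda r: r["STATE"] == "OH" and r["CITY"] != "HAMLET", {"STATE"}),        # D12
    (lambda r: r["STATE"] != "OH" and r["OCCUPATION"] == "ACCOUNTANT",
     {"OCCUPATION"}),                                                            # D21
]

def disclose(record, qids, rules):
    """Suppress every QID except those the matching cohort rule discloses."""
    for predicate, disclosed in rules:
        if predicate(record):
            return {q: (record[q] if q in disclosed else None) for q in qids}
    return {q: None for q in qids}  # no match: suppress everything

row = {"STATE": "OH", "CITY": "COLUMBUS", "OCCUPATION": "TEACHER"}
print(disclose(row, ["STATE", "CITY", "OCCUPATION"], RULES))
# -> {'STATE': 'OH', 'CITY': None, 'OCCUPATION': None}
```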
Turning back to
Various embodiments that utilize a rule set to provide k-anonymity in the answer (e.g., result 114) to a query (e.g., query 112) made to a relational database have several technical advantages. First, the rule set is comprised of a set of binary conditions. As such, the rule set can be used with many backend database technologies. This means that, in a common language such as SQL or C++, the decision whether to release each attribute in an answer (or result) to a query can be distilled into a CASE or SWITCH statement. A SQL example of this is shown in
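As a sketch of that idea (with illustrative, unescaped identifiers, not the figure's actual SQL), a searched CASE expression can be generated for each attribute from its disclosure condition:

```python
def case_expression(attribute, disclosure_condition):
    """Build a SQL CASE expression that releases `attribute` only when its
    disclosure condition holds, and NULLs it otherwise.  (Sketch only:
    identifiers are not quoted or escaped for production use.)"""
    return (f"CASE WHEN {disclosure_condition} THEN {attribute} "
            f"ELSE NULL END AS {attribute}")

print(case_expression("CITY", "STATE = 'OH' AND CITY <> 'HAMLET'"))
# -> CASE WHEN STATE = 'OH' AND CITY <> 'HAMLET' THEN CITY ELSE NULL END AS CITY
```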
In some embodiments, as mentioned previously, the techniques herein may include two distinct phases: Construction of the rule set and Application. Construction defines the process of identifying cohorts and building a rule set. Application involves applying the rule set to data.
In some embodiments, with an established data framework, the k-anonymization rule process can be divided into five primary steps, shown in the example of a flow diagram 900 of
Once data sources have been exposed, process control parameters can be defined, as shown in
In some embodiments, this operation creates a baseline size estimate for the raw cohorts. Optionally, cohorts which satisfy the minimum cohort size 1002 can be identified and segmented in 1106 into a trivially k-anonymous data table 1120. The data table 1120 can be considered trivially k-anonymous because these records can be disclosed with no additional policies applied to this subset. In some embodiments, the residual data is segmented in 1108 into a Residual Data Table 1122. If the optional segmentation 1106 is not run, all data is collected in the Residual Data Table 1122. In some embodiments, the total number of records in this Residual Table is then counted in 1110. In some embodiments, if the Residual Table has at least the minimum number of cohorts, then the process moves on to tree construction (
In the example of
If the optional process represented in
In some embodiments, an INPUT DATA TABLE 1200 and the DEPTH 1201 are fed into the Tree Construction process, an example of which is represented in
In conjunction with the rule set augmentation, the data table can be split, in some examples, into two partition elements in 1240, creating a NEGATIVE DATA TABLE 1242 and a POSITIVE DATA TABLE 1244. The NEGATIVE DATA TABLE 1242 and the NEGATIVE RULE SET 1236 can be fed, once again, back into the Tree Construction Process, shown in
In some embodiments, this process iterates until each branch terminates. As such, for each pass through the process, the number of partition elements can double, continually subdividing the overall dataset into closely associated sub-partitions. In some examples, with each partition element, the rule set can:
In some embodiments, in order to be disclosed, a given partition element must have some equality condition extracted from step 1232. In some embodiments, this means that, by definition, a partition consisting of a single partition element will contain no disclosures. This partition element can be defined as the fallback cohort. In some embodiments, the existence of this fallback cohort can guarantee that new data with unobserved attribute values can be added to the data table without risk of violating the k-anonymity constraint.
To illustrate this process, consider the data shown in
In some embodiments, the RULE EXTRACTION phase 941 can extract the set of conditions under which an attribute can be disclosed. Using the example shown in
In some examples, the RULE EXTRACTION phase 941 uses the cohorts which have been identified in the TREE CONSTRUCTION phase 940 and generates disclosure rules for each QID attribute. This process has two sub-processes, identifying attribute disclosure rules (shown in
In some embodiments, the process for identifying attribute disclosure rules (e.g., the rule extraction process implementation shown in
This means that each attribute defaults to being suppressed. In some examples, the process SELECTS A COHORT RULESET in 1306. This ruleset can define:
Using the example shown in
Similarly, D21 can be defined, in the embodiment shown, by the following predicates and disclosed attributes:
In some embodiments, for a single cohort ruleset, the process IDENTIFIES DISCLOSED ATTRIBUTES in 1308. If there are disclosed attributes, the process can, in some examples, EXTRACT COHORT PREDICATE in 1310 and UPDATE THE ATTRIBUTE DISCLOSURE RULE in 1312 for the cohort's disclosed attributes. This portion of the process can repeat until all of the cohorts have been used to update the disclosure rules at 1314.
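The extraction loop above can be sketched as follows, assuming cohort rulesets are given as (predicate, disclosed-attributes) pairs with predicates as SQL text; each attribute defaults to FALSE (suppressed) and accumulates, via OR, the predicate of every cohort that discloses it. The cohort conditions are hypothetical and follow the STATE/CITY example used herein.

```python
def extract_disclosure_rules(cohort_rulesets, qids):
    """For each QID, OR together the predicates of every cohort that
    discloses it; attributes with no disclosing cohort stay at FALSE."""
    rules = {q: [] for q in qids}  # empty list == never disclosed
    for predicate, disclosed in cohort_rulesets:
        for q in disclosed:
            rules[q].append(predicate)
    return {q: " OR ".join(f"({p})" for p in preds) or "FALSE"
            for q, preds in rules.items()}

cohorts = [
    ("STATE = 'OH' AND CITY = 'HAMLET'", {"STATE", "CITY"}),
    ("STATE = 'OH' AND CITY <> 'HAMLET'", {"STATE"}),
]
rules = extract_disclosure_rules(cohorts, ["STATE", "CITY", "OCCUPATION"])
print(rules["STATE"])
# -> (STATE = 'OH' AND CITY = 'HAMLET') OR (STATE = 'OH' AND CITY <> 'HAMLET')
print(rules["OCCUPATION"])  # -> FALSE
```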
Using the example in
In some embodiments, the simplification process, shown in
In some embodiments, (A=X) would be the most common predicate, occurring in three out of four sub-conditions. The sub-conditions are then split into those containing the most common predicate in 1336, and those not containing the most common predicate in 1334. For 1336, the most common predicate is factored out in 1338. In some embodiments, the precondition is augmented, taking the product of this and the PRECONDITION in 1330, with the results fed back into the simplification process. In some examples, the complementary conditions are also fed back into the simplification process, separately. The result can be a factored version of the predicate. Using the example above, after one iteration the rule becomes:
After the second iteration:
In some embodiments, this is done for the disclosure rule for each attribute, simplifying the final disclosure condition.
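The factoring step described above (finding the most common predicate, e.g. (A=X), splitting the sub-conditions on it, and recursing over both halves) can be sketched as follows. Sub-conditions are modeled as sets of atomic predicate strings, and the nested output structure is illustrative only, not the exact representation of the figures.

```python
from collections import Counter

def simplify(subconditions):
    """Recursively factor out the most common predicate.  Each sub-condition
    is a frozenset of atomic predicates (a conjunction); the result nests
    (factored_predicate, remaining_subconditions) pairs."""
    if len(subconditions) <= 1:
        return subconditions
    counts = Counter(p for cond in subconditions for p in cond)
    common, n = counts.most_common(1)[0]
    if n <= 1:
        return subconditions  # nothing shared: no factoring possible
    with_common = [cond - {common} for cond in subconditions if common in cond]
    without = [cond for cond in subconditions if common not in cond]
    return [(common, simplify(with_common))] + simplify(without)

# (A=X) occurs in three of four sub-conditions, so it is factored out first.
rule = [frozenset({"A=X", "B=Y"}), frozenset({"A=X", "B=Z"}),
        frozenset({"A=X", "C=W"}), frozenset({"D=V"})]
factored = simplify(rule)
print(factored[0][0])   # -> A=X
print(len(factored))    # -> 2
```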
Once the simplified disclosure rule set is in place, an end user can issue queries against a backing database 806, 816, 826 with a k-anonymization guaranteed in the answer (e.g., result 114) by utilizing the disclosure rule set. This is enabled using the rule application process 942, using for example, the subprocess shown in
In some embodiments, by substituting this statement everywhere the QID is referenced (SELECT, WHERE, GROUP BY, or JOIN statements), the k-anonymity constraint is satisfied.
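A toy sketch of this substitution follows; a real implementation would use a SQL parser rather than plain string replacement, and the CASE text shown is illustrative.

```python
def rewrite_query(sql, case_expressions):
    """Replace each bare QID reference with its CASE expression so that
    SELECT, WHERE, GROUP BY, and JOIN clauses all operate on k-anonymized
    values.  (Sketch only: string replacement stands in for query parsing.)"""
    for qid, case_expr in case_expressions.items():
        sql = sql.replace(qid, f"({case_expr})")
    return sql

cases = {"CITY": "CASE WHEN STATE = 'OH' THEN CITY ELSE NULL END"}
print(rewrite_query("SELECT CITY FROM people GROUP BY CITY", cases))
```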
In some examples, a system for applying k-anonymity constraints on data sources can include a virtualized database, an initial configuration, a baseline cohort definition process, a disclosure rule definition process, a disclosure rule simplification process, a rule application process, one or more processors, and a computer-readable data storage device storing program instructions that, when executed by the one or more processors, cause the system to perform operations including querying data stored in a database, storing configuration patterns, measuring the frequency of non-overlapping records, splitting data into distinct groups, substituting text, and controlling a display or other device to show or use the measured impact.
In some examples, the system can obtain a user defined minimum value for k. In some examples, the system can obtain a list of quasi-identifying attributes. In some examples, the system can obtain a back-end defined maximum policy size. In some examples, the system can obtain a user defined maximum depth. In some embodiments, the operations can include a decision tree construction process, a rule extraction process, a rule simplification process, a rule application process, or a combination thereof.
In some embodiments, the operations can include identification of trivial cohorts. In some examples, the operations can include determining a measure of disclosure information gain. In some examples, the operations can include generating a count table measuring the most frequently occurring attribute. In some examples, the decision tree construction process can include an assessment of cohort size, an assessment of policy size, an assessment of tree depth, or a combination thereof. In some embodiments, the rule extraction process can include extraction of cohort criteria. In some examples, the operations can include identification of frequently occurring sub-conditions. In some examples, the operations can include a factorization and fragmentation process over frequently occurring sub-conditions. In some embodiments, the rule application process can include a query parser. In some examples, the rule simplification process can include an attribute substitution process.
Embodiments disclosed herein include creating a rule set that can be used to apply k-anonymity to an answer or result to a query sent to a relational database. The rule set includes rules that define which attributes of each record in the database need to be suppressed. This allows the operator of the database or others to easily determine which attributes need to be suppressed to apply the needed k-anonymity. Additionally, embodiments provide the technical advantage that the rule set can be updated when new records are added to and/or deleted from the database, such that an operator or user does not need to determine which attributes to suppress for queries made regarding the new records. For example, consider a database in which the age attribute did not need to be suppressed because there were enough records with each age value that disclosure of the age attribute in an answer to a query did not cause the k-anonymity value to fall below the threshold. If a new record is then added with an age attribute having a value of 105, and no other records in the database have an age attribute of 105, then disclosure of the age 105 in response to a query made regarding the new record could result in a loss of anonymity. However, the systems and methods disclosed herein will update the rule set to suppress disclosure of the age 105 to maintain k-anonymity.
Further, embodiments automatically produce an answer or result that is transformed from the dataset per query without creating unnecessary duplicates, and the transformed data in the answer achieves a specified level of k-anonymity.
Further, in some embodiments, the k-anonymity can be adjusted or tuned by inputting a value for k into the system, which is used in developing the rule set. Furthermore, other inputs, such as tree depth and policy size, may be input into the system and used to develop the rule set, allowing a user to further tune and adjust the k-anonymity applied to the answer.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
This application claims the benefit and filing date of U.S. Provisional Application No. 62/979,845 filed on 21 Feb. 2020, which is hereby incorporated by reference in its entirety.
Published as US 2021/0264057 A1, Aug. 2021 (US).