The present disclosure relates generally to data distribution pipelines in a distributed-computing system, and more specifically, to ingesting multiple data streams by distributed-computing systems using secure and multi-directional data pipelines.
Modern distributed-computing systems are increasingly complex and can include thousands of host computing devices, virtual machines (VMs) and networking components, servicing an even larger number of clients. Systems operating in the clients' private networks produce massive volumes of machine-generated data (e.g., application logs, network traces, configuration files, messages, performance data, system state dumps, etc.). These data provide valuable information to system administrators as they manage these complex systems. These data can be useful in troubleshooting, discovering trends, detecting security problems, and measuring performance.
Data generated from systems operating in a client's private network often need to be distributed in multiple directions to multiple receivers or services. For example, they may need to be delivered to certain data collectors within the client's private network for providing on-premise services. They may also need to be delivered remotely to a cloud-services provider for providing various cloud-based services (e.g., software-as-a-service (SaaS)). As a result, the data often need to be delivered outside of the client's secure and private network infrastructure. Accordingly, there is a need for a secure and multi-directional data pipeline that enables bi-directional communications between the client's private network and a cloud-services provider's network, while also providing the capability of routing data within the client's private network for consumption by on-premise data collectors and services. Moreover, the secure and multi-directional data pipeline may need to deliver data in a substantially real time manner with high-throughput and low latency.
Described herein are techniques for ingesting data streams to a distributed-computing system using a multi-directional data ingestion pipeline. In one embodiment, a method for ingesting data streams includes, at a client gateway operating in a first computing environment having one or more processors and memory, receiving, from one or more data collectors operating in the first computing environment, a plurality of messages. The method further includes assigning the plurality of messages to one or more data streams; obtaining stream routing configurations; and identifying, based on the stream routing configurations, one or more receivers. The method further includes determining, based on the identified one or more receivers of the one or more data streams, whether at least one of the one or more data streams is to be delivered to one or more receivers operating in the first computing environment. In accordance with a determination that at least one of the one or more data streams is to be delivered to one or more receivers operating in the first computing environment, the method further includes delivering the at least one of the one or more data streams to the one or more receivers operating in the first computing environment; and delivering the one or more data streams to a data ingress gateway operating in a second computing environment. The one or more data streams are distributed to one or more receivers operating in the second computing environment.
In one embodiment, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors is provided. The one or more programs stored by the non-transitory computer-readable storage medium include instructions for receiving, from one or more data collectors operating in the first computing environment, a plurality of messages. The one or more programs include further instructions for assigning the plurality of messages to one or more data streams; obtaining stream routing configurations; and identifying, based on the stream routing configurations, one or more receivers. The one or more programs include further instructions for determining, based on the identified one or more receivers of the one or more data streams, whether at least one of the one or more data streams is to be delivered to one or more receivers operating in the first computing environment. In accordance with a determination that at least one of the one or more data streams is to be delivered to one or more receivers operating in the first computing environment, the one or more programs include further instructions for delivering the at least one of the one or more data streams to the one or more receivers operating in the first computing environment; and delivering the one or more data streams to a data ingress gateway operating in a second computing environment. The one or more data streams are distributed to one or more receivers operating in the second computing environment.
In one embodiment, a system for ingesting data streams to a distributed-computing system using a multi-directional data ingestion pipeline includes one or more processors and memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions for receiving, from one or more data collectors operating in the first computing environment, a plurality of messages. The one or more programs include further instructions for assigning the plurality of messages to one or more data streams; obtaining stream routing configurations; and identifying, based on the stream routing configurations, one or more receivers. The one or more programs include further instructions for determining, based on the identified one or more receivers of the one or more data streams, whether at least one of the one or more data streams is to be delivered to one or more receivers operating in the first computing environment. In accordance with a determination that at least one of the one or more data streams is to be delivered to one or more receivers operating in the first computing environment, the one or more programs include further instructions for delivering the at least one of the one or more data streams to the one or more receivers operating in the first computing environment; and delivering the one or more data streams to a data ingress gateway operating in a second computing environment. The one or more data streams are distributed to one or more receivers operating in the second computing environment.
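The client-gateway method recited in these embodiments can be sketched in outline as follows. This is an illustrative assumption only; the identifiers, the shape of the routing configurations, and the delivery callables are hypothetical and are not the disclosure's implementation:

```python
from collections import defaultdict

def ingest(messages, collector_to_stream, routing_config, local_receivers,
           deliver_local, deliver_to_ingress_gateway):
    # Assign each received message to a data stream based on the
    # data collector that forwarded it.
    streams = defaultdict(list)
    for msg in messages:
        streams[collector_to_stream[msg["collector_id"]]].append(msg)

    for stream_id, msgs in streams.items():
        # Identify receivers for this stream from the stream routing configurations.
        receivers = routing_config.get(stream_id, [])
        # Deliver to any receivers operating in the first (client) environment.
        for r in receivers:
            if r in local_receivers:
                deliver_local(r, stream_id, msgs)
        # Deliver the stream to the data ingress gateway operating in the
        # second (cloud-services) environment.
        deliver_to_ingress_gateway(stream_id, msgs)
    return streams
```

In this sketch the same stream can reach both an on-premise receiver and the cloud-side ingress gateway, which is the multi-directional behavior the embodiments describe.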
Described also herein are techniques for stream processing of one or more data streams ingested from a client gateway using a multi-directional data ingestion pipeline. In one embodiment, a method includes, at a data ingress gateway operating in a second computing environment having one or more processors and memory, receiving a first data stream ingested from a client gateway operating in a first computing environment different from the second computing environment and obtaining, based on the first data stream and receiver registration information, a first delivery policy associated with a first receiver group including one or more receivers. The method further includes receiving a second data stream ingested from the client gateway. The second data stream is different from the first data stream. The method further includes obtaining, based on the second data stream and the receiver registration information, a second delivery policy associated with a second receiver group including one or more receivers. The second delivery policy is different from the first delivery policy. The method further includes delivering the first data stream to the first receiver group in accordance with the first delivery policy and delivering the second data stream to the second receiver group in accordance with the second delivery policy.
In one embodiment, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors is provided. The one or more programs stored by the non-transitory computer-readable storage medium include instructions for receiving a first data stream ingested from a client gateway operating in a first computing environment different from the second computing environment and obtaining, based on the first data stream and receiver registration information, a first delivery policy associated with a first receiver group including one or more receivers. The one or more programs include further instructions for receiving a second data stream ingested from the client gateway. The second data stream is different from the first data stream. The one or more programs include further instructions for obtaining, based on the second data stream and the receiver registration information, a second delivery policy associated with a second receiver group including one or more receivers. The second delivery policy is different from the first delivery policy. The one or more programs include further instructions for delivering the first data stream to the first receiver group in accordance with the first delivery policy and delivering the second data stream to the second receiver group in accordance with the second delivery policy.
In one embodiment, a system for stream processing of one or more data streams ingested from a client gateway using a multi-directional data ingestion pipeline includes one or more processors and memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions for receiving a first data stream ingested from a client gateway operating in a first computing environment different from the second computing environment and obtaining, based on the first data stream and receiver registration information, a first delivery policy associated with a first receiver group including one or more receivers. The one or more programs include further instructions for receiving a second data stream ingested from the client gateway. The second data stream is different from the first data stream. The one or more programs include further instructions for obtaining, based on the second data stream and the receiver registration information, a second delivery policy associated with a second receiver group including one or more receivers. The second delivery policy is different from the first delivery policy. The one or more programs include further instructions for delivering the first data stream to the first receiver group in accordance with the first delivery policy and delivering the second data stream to the second receiver group in accordance with the second delivery policy.
In the following description of embodiments, reference is made to the accompanying drawings in which are shown by way of illustration specific embodiments that can be practiced. It is to be understood that other embodiments can be used and structural changes can be made without departing from the scope of the various embodiments.
As described above, traditional stream processing pipelines are often rigid and incapable of distributing data streams in multiple directions for consumption by both on-premise and cloud-based services. The techniques described in this application enable (1) collecting data from multiple data sources and platforms; (2) on-premise data sharing between the data collectors without the requirement to route through a cloud gateway or server in a cloud-services computing environment; (3) delivering data streams to multiple receivers and services in the cloud-services computing environment; and (4) delivering data streams in a multi-directional manner to receivers and services across multiple computing environments. As a result, data can be distributed flexibly from multiple data sources and platforms to multiple destinations in both the client computing environments (e.g., an on-premise client's private network) and the cloud-services computing environments. Analysis and services can thus be performed in a substantially real-time manner regardless of where the analysis and services are performed, whether on-premise or in-cloud. The capability of efficiently delivering or routing data streams within the client computing environment and/or to the cloud-services computing environment improves data throughput, reduces latency for data delivery, increases data analysis frequency and data resolution, and therefore enhances overall system operational efficiency.
Moreover, the gateway techniques described in this application facilitate delivering different data streams to different receivers or receiver groups based on different delivery policies. These techniques improve data delivery efficiency and flexibility because they enable multiple data streams to be multiplexed for delivery while allowing customization of the delivery policies on a per-stream basis. Thus, any single end-to-end data stream delivered from a particular data collector operating in the client computing environment to a particular receiver operating in the cloud-services computing environment can be customized with a particular delivery policy. The routing performance of the data distribution system is thus improved.
Virtualization layer 110 is installed on top of hardware platform 120. Virtualization layer 110, also referred to as a hypervisor, is a software layer that provides an execution environment within which multiple VMs 102 are concurrently instantiated and executed. The execution environment of each VM 102 includes virtualized components analogous to those comprising hardware platform 120 (e.g. a virtualized processor(s), virtualized memory, etc.). In this manner, virtualization layer 110 abstracts VMs 102 from physical hardware while enabling VMs 102 to share the physical resources of hardware platform 120. As a result of this abstraction, each VM 102 operates as though it has its own dedicated computing resources.
Each VM 102 includes operating system (OS) 106, also referred to as a guest operating system, and one or more applications (Apps) 104 running on or within OS 106. OS 106 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components. As in a traditional computing environment, OS 106 provides the interface between Apps 104 (i.e. programs containing software code) and the hardware resources used to execute or run applications. However, in this case the “hardware” is virtualized or emulated by virtualization layer 110. Consequently, Apps 104 generally operate as though they are in a traditional computing environment. That is, from the perspective of Apps 104, OS 106 appears to have access to dedicated hardware analogous to components of hardware platform 120.
It should be appreciated that applications (Apps) implementing aspects of the present disclosure are, in some embodiments, implemented as applications running within traditional computing environments (e.g., applications run on an operating system with dedicated physical hardware), virtualized computing environments (e.g., applications run on a guest operating system on virtualized hardware), containerized environments (e.g., applications packaged with dependencies and run within their own runtime environment), distributed-computing environments (e.g., applications run on or across multiple physical hosts) or any combination thereof. Furthermore, while specific implementations of virtualization and containerization are discussed, it should be recognized that other implementations of virtualization and containers can be used without departing from the scope of the various described embodiments.
In some embodiments, systems operating in client computing environment 210 can initiate communication with other computing environments (e.g., cloud-services computing environment 220) outside of the on-premise client's network infrastructure. For example, systems operating in client computing environment 210 (e.g., a client gateway 332 described with reference to
In some embodiments, for data security, systems operating in client computing environment 210 establish a connection and initiate communication with cloud-services computing environment 220. Systems operating in cloud-services computing environment 220 may not initiate communication but may respond to requests or data delivery from client computing environment 210 after the communication between the two environments is established. In some examples, after a connection (e.g., HTTP, HTTP/2, TCP) is established by systems operating in client computing environment 210, the communication between computing environments 210 and 220 can be bi-directional. For example, data streams can be delivered from client computing environment 210 to cloud-services computing environment 220. Acknowledgements, delivery status responses, and/or commands can be delivered from cloud-services computing environment 220 to client computing environment 210.
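The direction-of-initiation rule described above can be sketched as follows. This is a purely illustrative model, not the disclosure's protocol implementation: the cloud-services side never initiates communication and may only respond once the client side has established the connection.

```python
class CloudEndpoint:
    """Hypothetical cloud-side endpoint that only responds, never initiates."""

    def __init__(self):
        self.connected = False

    def accept(self):
        # The connection is always initiated by the client computing
        # environment; the cloud side merely accepts it.
        self.connected = True

    def respond(self, ack):
        # Acknowledgements, delivery status responses, and/or commands may
        # flow back only over an established, client-initiated connection.
        if not self.connected:
            raise RuntimeError("cloud side may not initiate communication")
        return ack
```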
In some embodiments, gateways (332 and 340 shown in
As illustrated in
With reference to
In some embodiments, data collectors 322 can also collect data from network virtualization and security platforms 314. Network virtualization and security platforms 314 abstract network operations from the underlying hardware onto a distributed virtualization layer, similar to server virtualization of processors and operating systems. For example, network virtualization and security platforms 314 provide logical switching, routing, distributed firewalling, load balancing, virtual private networking, application programming interfaces, dynamic security management, log management, and/or other network and security operations. Data generated during these operations may need to be provided for analyzing and optimizing network and security performance, and therefore are provided to one or more data collectors 322, as illustrated in
In some embodiments, after one or more data collectors 322 receive data (e.g., messages) from data sources 312 and/or network virtualization and security platforms 314, data collectors 322 can forward the data to client gateway 332 with or without further processing the data. As an example, data collectors 322 can forward the received messages to client gateway 332 associated with forwarder 230 without processing. As another example, data collectors 322 include one or more processing pipelines that can process the received messages (e.g., extracting payloads, annotating payloads, categorizing payloads, or the like) and then forward the processed messages to client gateway 332 associated with forwarder 230.
As illustrated in
In some embodiments, messages 324A-N include information (e.g., a data field) identifying which data collectors collected, processed, and/or forwarded the messages. For example, a particular message forwarded by data collector 322A can include a data field (e.g., a header) that indicates the particular message was collected, processed, and/or forwarded by data collector 322A. In some embodiments, to assign a particular message to a data stream, client gateway 332 obtains the information included in the message that identifies the collector that collected, processed, and/or forwarded the particular message to client gateway 332. Based on the identification of the data collector associated with the particular message, client gateway 332 identifies a particular data stream associated with the particular data collector that collected, processed, and/or forwarded the particular message. In some embodiments, client gateway 332 performs this identification using predetermined collector-stream associations stored in, for example, client configuration resources 334. For example, a particular data stream may be assigned a stream name or ID and associated with a particular data collector. All messages collected by the particular data collector can be assigned to the corresponding data stream. In some embodiments, assigning a particular message to a particular corresponding data stream can include associating a tag with the particular message, wherein the tag uniquely identifies the particular data stream. As a result, all messages that belong to the same data stream are associated with the same tag.
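The collector-to-stream assignment and tagging described above can be sketched as follows. The association table, the header layout, and the identifiers are assumptions made for illustration, not the disclosure's exact data layout:

```python
# Hypothetical predetermined collector-stream associations, e.g. as they
# might be stored in configuration resources.
COLLECTOR_STREAM_ASSOCIATION = {
    "collector-322A": "stream-338A",
    "collector-322B": "stream-338B",
}

def assign_to_stream(message):
    # Read the data field (e.g., a header) identifying the collector that
    # collected, processed, and/or forwarded the message.
    collector_id = message["header"]["collector"]
    # Look up the predetermined collector-stream association.
    stream_id = COLLECTOR_STREAM_ASSOCIATION[collector_id]
    # Tag the message; all messages of the same data stream share one tag
    # that uniquely identifies the stream.
    message["tag"] = stream_id
    return message
```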
In some embodiments, a data stream is further associated with and/or identified by the receivers/subscribers of the stream and/or the type of delivery policy to be used for delivering the data stream (e.g., asynchronous or synchronous delivery). The association of data streams with receivers/subscribers and delivery policies is described below in more detail with reference to
In some embodiments, the data streams generated by client gateway 332 (e.g., based on assigning messages to data streams) can be further processed before they are delivered to one or more receivers operating in client computing environment 210 and/or cloud-services computing environment 220. As illustrated in
In some embodiments, after client gateway 332 assigns messages received from data collectors 322A-N to one or more data streams, and the messages are optionally further processed, client gateway 332 obtains stream routing configurations for routing or delivering the data streams to their destinations. In some embodiments, client gateway 332 is configured such that a particular data stream can be delivered not only to remote destinations within cloud-services computing environment 220 for performing cloud-based services (e.g., SaaS services), but also to on-premise destinations within client computing environment 210 for performing on-premise data analysis and services.
As illustrated in
With reference to
In some embodiments, based on the identified one or more receivers, client gateway 332 determines whether one or more data streams are to be delivered to one or more receivers operating in the client computing environment 210. If so, client gateway 332 delivers the one or more data streams to the receivers operating in client computing environment 210. For example, based on the identification of data collector 372 as being an on-premise receiver, client gateway 332 delivers data stream 338A to data collector 372, which may in turn provide data stream 338A to on-premise services 402 for performing on-premise analysis and services (e.g., issue analysis, monitoring, alerting, provisioning, optimization, or the like) using the messages included in data stream 338A. Client gateway 332 thus enables on-premise or local data sharing between the data collectors without the requirement to route through a cloud gateway or server. As a result, the analysis and services can be performed on-premise in a substantially real time manner. The capability of efficiently delivering or routing data streams within client computing environment 210 improves data throughput, reduces latency for data delivery, increases data analysis frequency and data resolution, and enhances overall system operation efficiency.
With reference to
In some embodiments, in addition to delivering the one or more data streams to receivers operating in client computing environment 210, client gateway 332 can also deliver the data streams to a cloud gateway (e.g., a data ingress gateway) operating in a cloud-services computing environment 220. As an example and with reference to
With reference to
With reference back to
As illustrated in
In some embodiments, one or more messages in a data stream can include path fields indicating the destination of the data stream. The destination can be, for example, one or more receivers in a receiver group (e.g., receiver groups 350A-N) or one or more service agents (e.g., service agents 352A-N). As one example, a path field of a message in a particular stream includes a stream identification. A particular receiver or multiple receivers in a receiver group can be pre-registered with cloud gateways 340 to be a receiver or receivers for receiving data streams with a particular stream identification. The receiver registration information can be represented or included in, for example, a routing table. As a result, the stream identification included in the path field of a message and the receiver registration information can be used by cloud gateways 340 to identify the particular receiver or receivers in a receiver group for receiving the particular data stream. Similarly, using destination information and a routing table, cloud gateways 340 can also identify one or more service agents 352A-N for receiving particular data streams. In some examples, the routing table, which can include the receiver registration information, is stored in cloud configuration resources 358 accessible by cloud gateways 340.
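The path-field lookup against receiver registration information described above can be sketched as follows. The routing-table shape and the registered names are illustrative assumptions only:

```python
# Hypothetical receiver registration information, e.g. as it might appear
# in a routing table stored in cloud configuration resources: each stream
# identification maps to the receivers pre-registered for that stream.
RECEIVER_REGISTRATION = {
    "stream-362A": ["receiver-552A", "receiver-552B"],
    "stream-362B": ["service-agent-352A"],
}

def resolve_receivers(message):
    # The path field of a message in the stream carries the stream
    # identification used to identify the registered receivers.
    stream_id = message["path"]["stream_id"]
    return RECEIVER_REGISTRATION.get(stream_id, [])
```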
In some embodiments, a receiver group including one or more receivers can be associated with a data stream delivery policy. Different receiver groups can have different delivery policies.
As described above, in some examples, different receiver groups can have different data stream delivery policies. A cloud gateway can obtain the delivery policy associated with a particular data stream. As illustrated in
In some embodiments, different data stream delivery policies (e.g., policies 526) can be associated with receiver group 350A and receiver group 350B. For example, a wait-for-all policy may be associated with receiver group 350A and a wait-for-one policy may be associated with receiver group 350B. With reference to
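The two delivery policies named above can be sketched as follows. This is a hedged illustration, not the disclosure's implementation: under a wait-for-all policy the delivery completes only when every receiver in the group acknowledges, while under a wait-for-one policy a single acknowledgement suffices. The send function is a placeholder for the actual transport.

```python
def deliver(stream_msgs, receiver_group, policy, send):
    # send(receiver, msgs) is assumed to return True on acknowledgement.
    acks = [send(receiver, stream_msgs) for receiver in receiver_group]
    if policy == "wait-for-all":
        # Complete only when all receivers in the group acknowledge.
        return all(acks)
    elif policy == "wait-for-one":
        # Complete as soon as any one receiver acknowledges.
        return any(acks)
    raise ValueError(f"unknown delivery policy: {policy}")
```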
As illustrated in
With reference to
As further illustrated in
With reference to
As described above, receivers 552A and 552B are publish-subscribe type receivers and therefore, messages in data streams routed by cloud gateway 340A are delivered to the subscribers 348A-N in response to the subscribers' requests. As a result, the messages may not be delivered in real time or substantially real time, depending on the frequency at which subscribers request data. In some embodiments, with reference back to
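A minimal sketch of such a publish-subscribe receiver follows. The class and its interface are assumptions for illustration: the gateway publishes messages into a per-stream buffer, and subscribers pull at their own request frequency, which is why delivery latency depends on how often they poll.

```python
from collections import deque

class PubSubReceiver:
    """Hypothetical publish-subscribe receiver with per-stream buffers."""

    def __init__(self):
        self._buffers = {}

    def publish(self, stream_id, message):
        # The cloud gateway publishes routed messages into the stream buffer.
        self._buffers.setdefault(stream_id, deque()).append(message)

    def pull(self, stream_id, max_messages=10):
        # Subscribers request data on their own schedule; messages wait in
        # the buffer until pulled.
        buf = self._buffers.get(stream_id, deque())
        return [buf.popleft() for _ in range(min(max_messages, len(buf)))]
```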
With reference to
As another example, multiple data streams can be routed or delivered in accordance with a predefined order determined based on priorities associated with one or more subscribers. For instance, with reference to
As another example, multiple data streams can be routed or delivered dynamically based on one or more network-related conditions. For example, with reference back to
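The stream-ordering examples above can be sketched together as follows. The scoring inputs (subscriber priorities, a per-destination latency measurement) are illustrative assumptions, not the disclosure's metrics: a predefined order derived from subscriber priorities, or a dynamic order derived from observed network-related conditions.

```python
def order_streams(pending, priorities=None, latency_ms=None):
    if priorities is not None:
        # Predefined order: streams for higher-priority subscribers first.
        return sorted(pending, key=lambda s: priorities.get(s, 0), reverse=True)
    if latency_ms is not None:
        # Dynamic order based on network-related conditions: route first to
        # destinations currently showing lower latency.
        return sorted(pending, key=lambda s: latency_ms.get(s, float("inf")))
    # Default: deliver in arrival order.
    return list(pending)
```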
With reference to
At block 602, a plurality of messages is received at the client gateway from one or more data collectors operating in the first computing environment (e.g., data collectors 322 operating in client computing environment 210 described with reference to
At block 604, the plurality of messages is assigned to one or more data streams (e.g., data streams 212 described with reference to
At block 606, stream routing configurations (e.g., routing tables) are obtained by the client gateway (e.g., client gateway 332 described with reference to
At block 608, one or more receivers (e.g., data collector 372 and service agents 404 described with reference to
At block 610, whether at least one of the one or more data streams is to be delivered to one or more receivers operating in the first computing environment is determined based on the identified one or more receivers of the one or more data streams.
At block 612, if it is determined that at least one of the one or more data streams is to be delivered to one or more receivers operating in the first computing environment, the at least one of the one or more data streams (e.g., data streams 338A-B) are delivered to the one or more receivers operating in the first computing environment.
At block 614, the one or more data streams (e.g., data streams 212) are delivered to a data ingress gateway operating in a second computing environment (e.g., cloud gateways 340 operating in cloud-services computing environment 220). The one or more data streams are distributed to one or more receivers operating in the second computing environment.
At block 622, a first data stream (e.g., data stream 362A described with reference to
At block 624, based on the first data stream and receiver registration information, a first delivery policy associated with a first receiver group (e.g., receiver group 350A described with reference to
At block 626, a second data stream (e.g., data stream 362B described with reference to
At block 628, based on the second data stream and the receiver registration information, a second delivery policy associated with a second receiver group (e.g., receiver group 350B described with reference to
At block 630, the first data stream is delivered to the first receiver group in accordance with the first delivery policy.
At block 632, the second data stream is delivered to the second receiver group in accordance with the second delivery policy.
In accordance with some implementations, a computer-readable storage medium (e.g., a non-transitory computer readable storage medium) is provided, the computer-readable storage medium storing one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions for performing any of the methods or processes described herein.
The foregoing descriptions of specific embodiments have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed, and it should be understood that many modifications and variations are possible in light of the above teaching.
This application is a Division of U.S. patent application Ser. No. 16/047,968, entitled “SECURE MULTI-DIRECTIONAL DATA PIPELINE FOR DATA DISTRIBUTION SYSTEMS,” filed Jul. 27, 2018, and relates to U.S. patent application Ser. No. 16/047,755, entitled “BIDIRECTIONAL COMMAND PROTOCOL VIA A UNIDIRECTIONAL COMMUNICATION CONNECTION FOR RELIABLE DISTRIBUTION OF TASKS,” filed on Jul. 27, 2018, the contents of which are incorporated by reference for all purposes.
Number | Date | Country | |
---|---|---|---|
20210273990 A1 | Sep 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16047968 | Jul 2018 | US |
Child | 17322817 | US |