A number of scenarios exist where multiple services need to store, update and/or access state information, such as configuration information and test results. In distributed systems, however, it is often difficult for professionals, such as technology professionals, to manage the state information associated with such services.
Illustrative embodiments of the disclosure provide techniques for management of state information using a message queue. One method comprises, in response to a change in state information associated with a given service in an information technology infrastructure: updating the state information associated with the given service; and publishing the updated state information associated with the given service to one or more topics of a message queue of the information technology infrastructure, wherein the updated state information is consumed from the message queue by at least one additional service, based at least in part on one or more topic subscriptions, to update respective state information maintained by the at least one additional service.
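The publish/subscribe flow summarized above can be sketched with a minimal in-memory topic queue. This is an illustrative sketch only; the topic name, service names and callback-based delivery are assumptions for the example and are not part of the disclosure:

```python
from collections import defaultdict


class MessageQueue:
    """Minimal in-memory topic queue illustrating publish/subscribe delivery."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the published state information to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(message)


class Service:
    """A service that maintains its own copy of published state information."""

    def __init__(self, name):
        self.name = name
        self.state = {}

    def on_state_update(self, message):
        # Consume updated state information from the queue and refresh
        # the state maintained locally by this service.
        self.state.update(message)


queue = MessageQueue()
consumer = Service("consumer")
queue.subscribe("state-updates", consumer.on_state_update)

# A change in a producer's state is published to the topic, and the
# subscribing service updates the state information it maintains.
queue.publish("state-updates", {"checker-1": "scan-complete"})
```

A real deployment would use a durable broker rather than in-process callbacks, but the producer/topic/subscriber relationship is the same.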
In some embodiments, the given service comprises a processor-based software testing checker that performs a scan of software. The one or more topics of the message queue may serve as a state store. The state information may comprise one or more of a current configuration and a current state of one or more of (i) software, (ii) a system and (iii) an entity.
In one or more embodiments, the state information comprises dynamic information for a plurality of services, and the given service updates a respective portion of the state information in a local cache of the given service and publishes the updated state information, comprising the dynamic information for the plurality of services, to one or more topics on the message queue from the local cache. One or more of the plurality of services may maintain a respective local cache of the state information and update at least a portion of the respective local cache in response to updated state information received from the message queue. The update to at least a portion of the respective local cache in response to the updated state information received from the message queue may be implemented at least in part by a queue manager associated with the respective local cache.
Illustrative embodiments provide significant advantages relative to conventional techniques for managing state information. For example, technical problems associated with the management of state information are mitigated in one or more embodiments by employing a message queue that allows the state information to be updated and distributed using a publish/subscribe model.
Other illustrative embodiments include, without limitation, apparatus, systems, methods and computer program products comprising processor-readable storage media.
Illustrative embodiments of the present disclosure will be described herein with reference to exemplary communication, storage and processing devices. It is to be appreciated, however, that the disclosure is not restricted to use with the particular illustrative configurations shown. One or more embodiments of the disclosure provide methods, apparatus and computer program products for management of state information using a message queue.
In one or more embodiments, techniques are provided for event-based state information management. The term “state information,” as used herein, shall be broadly construed so as to encompass dynamic information characterizing a current state and/or a current configuration of software, a system or an entity, as would be apparent to a person of ordinary skill in the art.
In some embodiments, a message queue may be used to store state information and to notify services or other entities upon one or more changes in the state information records. The state information may comprise, for example, state information for multiple services (or other entities), and a given service (or another entity) may update its own portion of the state information in a local cache of the given service and then publish the updated state information (comprising the state information for the multiple services or entities) to one or more topics on a message queue from the local cache. One or more of the additional multiple services may maintain a respective local cache of the state information and update the respective local cache, or portions thereof, in response to obtaining updated state information from the message queue.
In the example of
The event sources 110 may be configured, in at least some embodiments, to send as much information as possible for as many events as possible to the event dispatcher 120. The event dispatcher 120 provides the messages to a sequential message queue 105, such as an enterprise service bus (ESB), where each message is published on the sequential message queue 105. One or more of the message consumers 150 consume one or more of the published messages on the sequential message queue 105. In the example of
The sequential message queue 105 may be implemented, for example, as an ESB, a distributed event streaming platform, a distributed messaging system or using message-oriented middleware. An ESB is a software platform used to distribute work among connected components of an application. The ESB is designed to provide a uniform means of moving work, offering applications the ability to connect to the ESB and to subscribe to messages. In some embodiments, the sequential message queue 105 may be implemented, at least in part, using the techniques described in U.S. Pat. No. 11,722,451, incorporated by reference herein in its entirety.
In some embodiments, the sequential message queue 105 supports publishing (e.g., writing) streams of events and subscribing to (e.g., reading) the published streams of events. The sequential message queue 105 may also store the streams of events durably and reliably. A message storage service (not shown in
The topic message store 160 in the present embodiment may be implemented using one or more storage systems associated with the sequential message queue 105. Such storage systems can comprise any of a variety of different types of storage, such as network-attached storage (NAS), storage area networks (SANs), direct-attached storage (DAS) and distributed DAS, as well as combinations of these and other storage types, including software-defined storage.
In addition, the message storage service associated with the sequential message queue 105 may also notify one or more of the message consumers 150 of the availability of new published event messages on the sequential message queue 105. In some embodiments, the message storage service will notify those message consumers 150 that subscribed to any of the topics 165 where the new published event message was published. In a further variation, the message consumers 150 can look for new messages on the sequential message queue 105.
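The two consumption styles described above, notification of subscribed consumers and consumers looking for new messages themselves, can be contrasted in a small sketch. The class and method names here are illustrative assumptions, not part of the disclosure:

```python
from collections import deque


class MessageStore:
    """Sketch of a message storage service supporting both push-style
    notification of subscribers and pull-style polling by consumers."""

    def __init__(self):
        self.messages = deque()
        self.listeners = []

    def subscribe(self, callback):
        # A consumer registers to be notified of newly published messages.
        self.listeners.append(callback)

    def publish(self, message):
        self.messages.append(message)
        # Push style: notify every subscribed consumer immediately.
        for callback in self.listeners:
            callback(message)

    def poll(self):
        # Pull style: a consumer looks for new messages on its own schedule.
        return self.messages.popleft() if self.messages else None


received = []
store = MessageStore()
store.subscribe(received.append)
store.publish({"event": "new-state"})
polled = store.poll()
```

In this sketch the same message is both pushed and left available for polling; a production queue would instead track per-consumer offsets.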
In addition, one or more of the message consumers 150, such as message consumer 150-P in the example of
One or more of the sequential message queue 105, event sources 110, event dispatcher 120, message consumers 150 and database 170 may be coupled to a network, where the network in this embodiment is assumed to represent a sub-network or other related portion of a larger computer network. The network is assumed to comprise a portion of a global computer network such as the Internet, although other types of networks can be part of the computer network, including a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as a Wi-Fi or WiMAX network, or various portions or combinations of these and other types of networks. The network in some embodiments therefore comprises combinations of multiple different types of networks, each comprising processing devices configured to communicate using internet protocol (IP) or other related communication protocols.
It is to be appreciated that the term “user” is intended to be broadly construed so as to encompass, for example, human, hardware, software or firmware entities, as well as various combinations of such entities. Compute and/or storage services may be provided for users under a Platform-as-a-Service (PaaS) model, an Infrastructure-as-a-Service (IaaS) model, a Storage-as-a-Service (STaaS) model and/or a Function-as-a-Service (FaaS) model, although it is to be appreciated that numerous other cloud infrastructure arrangements could be used. Also, illustrative embodiments can be implemented outside of the cloud infrastructure context, as in the case of a stand-alone computing and storage system implemented within a given enterprise.
One or more of the sequential message queue 105, event sources 110, event dispatcher 120, message consumers 150 and database 170 illustratively comprise (or employ) processing devices of one or more processing platforms. For example, the event sources 110 may execute on one or more processing devices each having a processor and a memory, possibly implementing virtual machines and/or containers, although numerous other configurations are possible. The processor illustratively comprises a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements.
One or more of the sequential message queue 105, event sources 110, event dispatcher 120, message consumers 150 and database 170 can additionally or alternatively be part of cloud infrastructure.
It is to be appreciated that this particular arrangement of elements 114 and 154 illustrated in the information processing system 100 of the
The exemplary event dispatcher 120, for example, may include one or more additional modules and other components typically found in conventional implementations of an event dispatcher 120, although such additional modules and other components are omitted from the figure for clarity and simplicity of illustration.
In the
The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and associated storage systems that are configured to communicate over one or more networks. For example, distributed implementations of the system 100 are possible, in which certain components of the system reside in one data center in a first geographic location while other components of the system reside in one or more other data centers in one or more other geographic locations that are potentially remote from the first geographic location. Thus, it is possible in some implementations of the system 100 for different instances or portions of one or more of the event sources 110, event dispatcher 120 and/or message consumers 150 to reside in different data centers. Numerous other distributed implementations of the components of the information processing system 100 are possible.
As noted above, the exemplary message consumer 150-P can have an associated database 170 where the message consumer 150-P can store the messages that are published to the sequential message queue 105. Although the published messages are stored in the example of
The database 170 in the present embodiment is implemented using one or more storage systems. Such storage systems can comprise any of a variety of different types of storage including network-attached storage (NAS), storage area networks (SANs), direct-attached storage (DAS) and distributed DAS, as well as combinations of these and other storage types, including software-defined storage.
Also associated with one or more of the event sources 110, event dispatcher 120, and/or message consumers 150 can be one or more input/output devices (not shown), which illustratively comprise keyboards, displays or other types of input/output devices in any combination. Such input/output devices can be used, for example, to support one or more user interfaces to one or more components of the information processing system 100, as well as to support communication between the components of the information processing system 100 and/or other related systems and devices not explicitly shown.
The memory of one or more processing platforms illustratively comprises random access memory (RAM), read-only memory (ROM) or other types of memory, in any combination. The memory and other memories disclosed herein may be viewed as examples of what are more generally referred to as “processor-readable storage media” storing executable computer program code or other types of software programs.
One or more embodiments include articles of manufacture, such as computer-readable storage media. Examples of an article of manufacture include, without limitation, a storage device such as a storage disk, a storage array or an integrated circuit containing memory, as well as a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. These and other references to “disks” herein are intended to refer generally to storage devices, including solid-state drives (SSDs), and should therefore not be viewed as limited in any way to spinning magnetic media.
It is to be understood that the particular set of elements shown in
Generally, an event records the fact that something has happened, typically with respect to an operation of one of the event sources 110. When the sequential message queue 105 is implemented, for example, as a distributed event streaming platform, data is read and written in the form of events. An event typically has a key, a value, a timestamp, and optional metadata. Producers are those services or applications that publish (e.g., write) events to the sequential message queue 105, and consumers are those services or applications that subscribe to (e.g., read and process) such published events from the sequential message queue 105.
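The event structure described above, a key, a value, a timestamp and optional metadata, can be expressed as a simple data type. The field names and the example values below are illustrative assumptions:

```python
import time
from dataclasses import dataclass, field


@dataclass
class Event:
    """An event records the fact that something has happened at an event
    source: a key, a value, a timestamp and optional metadata."""

    key: str
    value: dict
    timestamp: float = field(default_factory=time.time)
    metadata: dict = field(default_factory=dict)


# A producer writes an event of this form to the queue; a consumer
# subscribes to the relevant topic and reads and processes it.
event = Event(
    key="checker-1",
    value={"status": "scan-started"},
    metadata={"source": "software-testing-checker"},
)
```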
In at least some embodiments, the event dispatcher 325 processes (i) configuration messages, for example, following a user selection of designated options (such as available software testing checkers) to maintain, for example, a mapping of incoming event information to relevant topics on a message queue, and (ii) event messages, such as state change messages, generated by event sources that are posted by the event dispatcher to topics on the message queue, based on the mapping, and are consumed from the message queue by one or more interested services, such as software testing checkers.
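The two message paths described above, configuration messages that maintain a mapping of event information to topics, and event messages routed using that mapping, can be sketched as follows. The dispatcher and queue classes here are illustrative stand-ins, not the disclosed implementation:

```python
from collections import defaultdict


class TopicQueue:
    """Stand-in for a sequential message queue with named topics."""

    def __init__(self):
        self.topics = defaultdict(list)

    def publish(self, topic, message):
        self.topics[topic].append(message)


class EventDispatcher:
    """Sketch of an event dispatcher: configuration messages maintain a
    mapping of incoming event types to topics, and event messages are
    posted to topics on the queue based on that mapping."""

    def __init__(self, queue):
        self.queue = queue
        self.topic_map = {}  # event type -> list of topics

    def configure(self, event_type, topics):
        # Process a configuration message (e.g., following a user
        # selection of designated software testing checkers).
        self.topic_map[event_type] = topics

    def dispatch(self, event_type, payload):
        # Post an event message (e.g., a state change) to each mapped topic.
        for topic in self.topic_map.get(event_type, []):
            self.queue.publish(topic, payload)


queue = TopicQueue()
dispatcher = EventDispatcher(queue)
dispatcher.configure("state-change", ["checker-results", "audit-log"])
dispatcher.dispatch("state-change", {"service": "checker-1", "state": "done"})
```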
The event dispatcher 325 may publish the event information 330, such as state information, from one or more of the event messages 320 to one or more input topics 345-1 through 345-N, collectively referred to herein as input topics 345, associated with a sequential message queue 340. For example, the event dispatcher 325 may provide the event information 330 to the sequential message queue 340, such as the sequential message queue 105 of
In the example of
In addition, one or more of the message processors 350 publish event information 360, such as state information, as one or more additional messages to one or more output topics 375-1 through 375-N, collectively referred to herein as output topics 375, associated with a sequential message queue 370. The sequential message queue 340 and the sequential message queue 370 may be implemented as the same message queue in at least some embodiments with different topics (e.g., input topics 345 on the sequential message queue 340 and output topics 375 on the sequential message queue 370).
In the
In this manner, a cascading of event information 330, 360, such as state information, occurs among message processors 350, 380, and may result in a cascading stream of actions by such message processors 350, 380. The message processors 350, 380 may consume messages from and/or publish messages to one or more sequential message queues.
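The cascading of event information among message processors can be sketched as a two-stage pipeline: one processor consumes from an input topic and publishes to an output topic, and a downstream processor consumes that output in turn. The topic names, processor functions and payloads are illustrative assumptions:

```python
from collections import deque


class SequentialQueue:
    """Minimal FIFO topic store used by the cascading processors below."""

    def __init__(self):
        self.topics = {}

    def publish(self, topic, message):
        self.topics.setdefault(topic, deque()).append(message)

    def consume(self, topic):
        pending = self.topics.get(topic)
        return pending.popleft() if pending else None


def scan_processor(queue):
    # Consume event information from an input topic, process it, and
    # publish the result as a new message to an output topic.
    message = queue.consume("input-topic")
    if message is not None:
        queue.publish("output-topic", {"scanned": message["file"], "result": "pass"})


def report_processor(queue):
    # A downstream processor consumes the first processor's output,
    # cascading the stream of event information.
    message = queue.consume("output-topic")
    if message is not None:
        queue.publish("report-topic", {"report": f"{message['scanned']}: {message['result']}"})


queue = SequentialQueue()
queue.publish("input-topic", {"file": "app.py"})
scan_processor(queue)
report_processor(queue)
```

As in the description above, the input and output topics may live on the same underlying queue or on separate queues.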
Each service 410 may have a respective one of a plurality of queue managers 414-1 through 414-R (collectively, queue managers 414) and a respective one of a plurality of in-memory caches 418-1 through 418-R (collectively, in-memory caches 418). In one or more embodiments, each service 410 is identified by a corresponding key. For example, a given service, such as service 410-1, may be implemented as a software testing checker that performs a scan of software and updates its respective portion of the state information in a respective in-memory cache 418-1 and then updates the entire state as a state change event 420. In this manner, each service 410 may maintain its respective in-memory cache 418 as a local cache of the state information and may update the local cache, or portions thereof, in response to updated state information received from the sequential message queue 430. In addition, in response to a change in the state information of service 410-1, the producer service 410-1 publishes the updated state information of the service 410-1 as a state change event 420, and the respective queue manager 414 of one or more consumer services 410-2 through 410-R adds and/or removes state information in the respective in-memory cache 418 based on the changes in the state information, as discussed hereinafter.
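The pattern described above, a service identified by a key that updates its own portion of the state in an in-memory cache and then publishes the entire state as a state change event, can be sketched as follows. The class, key names and list-based queue stand-in are illustrative assumptions:

```python
class StateService:
    """Sketch of a service (e.g., a software testing checker) that keeps a
    local in-memory cache of the state information and publishes the
    entire cached state as a state change event when its own state changes."""

    def __init__(self, key, queue):
        self.key = key    # each service is identified by a corresponding key
        self.queue = queue
        self.cache = {}   # local in-memory cache of the state information

    def update_state(self, state):
        # Update this service's own portion of the state in the local
        # cache, then publish the entire state as a state change event.
        self.cache[self.key] = state
        self.queue.append(dict(self.cache))

    def on_state_change(self, event):
        # A consumer service refreshes its local cache from the event.
        self.cache.update(event)


queue = []  # stand-in for the sequential message queue
producer = StateService("checker-1", queue)
consumer = StateService("checker-2", queue)

producer.update_state("scan-in-progress")
consumer.on_state_change(queue[-1])
```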
As shown in
In some embodiments, the queue managers 414 and 424 within each service monitor the status of ongoing events (e.g., periodically) for a completion of the ongoing events, and may update other services. For example, if a given event (e.g., a software scan) completes, the given event may be removed from the queue.
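The monitoring behavior described above can be sketched as a queue manager that periodically checks the status of ongoing events and removes those that have completed. The class shape and the status-lookup callback are illustrative assumptions:

```python
class QueueManager:
    """Sketch of a queue manager that monitors the status of ongoing
    events and removes an event (e.g., a software scan) once it completes."""

    def __init__(self):
        self.ongoing = {}  # event id -> status

    def track(self, event_id):
        self.ongoing[event_id] = "in-progress"

    def poll(self, status_lookup):
        # Periodically check each ongoing event; if a given event has
        # completed, remove it from the queue of ongoing events.
        for event_id in list(self.ongoing):
            if status_lookup(event_id) == "complete":
                del self.ongoing[event_id]


manager = QueueManager()
manager.track("scan-42")
manager.track("scan-43")

statuses = {"scan-42": "complete", "scan-43": "in-progress"}
manager.poll(lambda event_id: statuses[event_id])
```

In practice the `poll` step would run on a timer and the completion check would query the event source rather than a local dictionary.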
The event dispatcher 422 may publish the state change event 428 (e.g., a duplicate or a processed version of the state change event 420) or other state information to one or more topics 440-1 through 440-S, collectively referred to herein as topics 440, associated with the sequential message queue 430. For example, the event dispatcher 422 may provide the state change event 428 to the sequential message queue 430, for example, implemented in a similar manner as the sequential message queue 105 of
In the example of
In the example of
In the example of
The term “software testing checker” as used herein shall be broadly construed so as to encompass any software function or other event-driven entity that evaluates software, such as a static software scanner and/or a dynamic software scanner. The results from one or more software testing checkers may be evaluated in connection with a policy. Exemplary events associated with the software may comprise, for example, software push events, such as a software build request, a software pull request and/or a software deployment request, or other events that transition software from one stage to another (e.g., a software development stage to a software deployment stage).
In some embodiments, the given service comprises a processor-based software testing checker that scans software (for example, using dynamic software scans and/or static software scans). The one or more topics of a message queue may serve as a state store. The state information may comprise a current configuration and/or a current state of software, a system and/or an entity.
In one or more embodiments, the state information may comprise dynamic information for multiple services (or other entities), and a given service (or other entity) may update its respective portion of the state information in the local cache of the given service and publish the updated state information (comprising the dynamic information for the multiple services or other entities) to one or more topics on the message queue from the local cache. One or more of the multiple services may maintain a respective local cache of the state information and update the respective local cache, or a portion thereof, in response to updated state information received from the message queue. The update to the respective local cache, or the portion thereof, in response to the updated state information received from the message queue may be implemented, at least in part, by a queue manager associated with the respective local cache.
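The cache maintenance described above, a queue manager adding and/or removing entries in a local cache based on received state changes, can be sketched as a small update routine. The convention that a value of `None` marks a removal is an assumption made for this illustration only:

```python
def apply_state_update(cache, update):
    """Sketch of a queue manager applying a received state update to a
    local cache: keys with a value are added or updated, and keys mapped
    to None are removed (the None-means-remove convention is assumed
    for illustration, not specified by the disclosure)."""
    for key, value in update.items():
        if value is None:
            cache.pop(key, None)  # remove state information for this key
        else:
            cache[key] = value    # add or update state information
    return cache


cache = {"checker-1": "idle", "checker-2": "scanning"}
apply_state_update(cache, {"checker-2": None, "checker-3": "queued"})
```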
The particular processing operations and other network functionality described in conjunction with
The disclosed techniques for management of state information using a message queue can be employed, for example, to process (i) configuration messages that specify a configuration of software, a system and/or an entity, such as a software testing checker, and/or (ii) state information messages generated by services, such as software scan results of such software testing checkers, that are posted to one or more topics on the message queue and are consumed from the message queue by one or more interested additional services.
In addition, interested services (or other entities) may subscribe to one or more applicable topics on the message queue to obtain messages with state information updates and each interested service (or other entity) can update its respective local cache based upon the received changes to the state information. The message queue provides a notification mechanism that allows for a quick retrieval of relevant state information with a latency that is comparable to other in-memory cache mechanisms.
One or more embodiments of the disclosure provide improved methods, apparatus and computer program products for management of state information using a message queue. The foregoing applications and associated embodiments should be considered as illustrative only, and numerous other embodiments can be configured using the techniques disclosed herein, in a wide variety of different applications.
It should also be understood that the disclosed event-based state information management techniques, as described herein, can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer. As mentioned previously, a memory or other storage device having such program code embodied therein is an example of what is more generally referred to herein as a “computer program product.”
The disclosed techniques for management of state information using a message queue may be implemented using one or more processing platforms. One or more of the processing modules or other components may therefore each run on a computer, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.”
As noted above, illustrative embodiments disclosed herein can provide a number of significant advantages relative to conventional arrangements. It is to be appreciated that the particular advantages described above and elsewhere herein are associated with particular illustrative embodiments and need not be present in other embodiments. Also, the particular types of information processing system features and functionality as illustrated and described herein are exemplary only, and numerous other arrangements may be used in other embodiments.
In these and other embodiments, compute services can be offered to cloud infrastructure tenants or other system users as a PaaS offering, although numerous alternative arrangements are possible.
Some illustrative embodiments of a processing platform that may be used to implement at least a portion of an information processing system comprise cloud infrastructure including virtual machines implemented using a hypervisor that runs on physical infrastructure. The cloud infrastructure further comprises sets of applications running on respective ones of the virtual machines under the control of the hypervisor. It is also possible to use multiple hypervisors each providing a set of virtual machines using at least one underlying physical machine. Different sets of virtual machines provided by one or more hypervisors may be utilized in configuring multiple instances of various components of the system.
These and other types of cloud infrastructure can be used to provide what is also referred to herein as a multi-tenant environment. One or more system components such as a cloud-based event-based state information management engine, or portions thereof, are illustratively implemented for use by tenants of such a multi-tenant environment.
Cloud infrastructure as disclosed herein can include cloud-based systems. Virtual machines provided in such systems can be used to implement at least portions of a cloud-based event-based state information management platform in illustrative embodiments. The cloud-based systems can include object stores.
In some embodiments, the cloud infrastructure additionally or alternatively comprises a plurality of containers implemented using container host devices. The containers may run on virtual machines in a multi-tenant environment, although other arrangements are possible. The containers may be utilized to implement a variety of different types of functionality within the storage devices. For example, containers can be used to implement respective processing devices providing compute services of a cloud-based system. Again, containers may be used in combination with other virtualization infrastructure such as virtual machines implemented using a hypervisor.
Illustrative embodiments of processing platforms will now be described in greater detail with reference to
The cloud infrastructure 700 further comprises sets of applications 710-1, 710-2, . . . 710-L running on respective ones of the VMs/container sets 702-1, 702-2, . . . 702-L under the control of the virtualization infrastructure 704. The VMs/container sets 702 may comprise respective VMs, respective sets of one or more containers, or respective sets of one or more containers running in VMs.
In some implementations of the
In other implementations of the
As is apparent from the above, one or more of the processing modules or other components of system 100 may each run on a computer, server, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.” The cloud infrastructure 700 shown in
The processing platform 800 in this embodiment comprises at least a portion of the given system and includes a plurality of processing devices, denoted 802-1, 802-2, 802-3, . . . 802-K, which communicate with one another over a network 804. The network 804 may comprise any type of network, such as a WAN, a LAN, a satellite network, a telephone or cable network, a cellular network, a wireless network such as a Wi-Fi or WiMAX network, or various portions or combinations of these and other types of networks.
The processing device 802-1 in the processing platform 800 comprises a processor 810 coupled to a memory 812. The processor 810 may comprise a microprocessor, a microcontroller, an ASIC, an FPGA or other type of processing circuitry, as well as portions or combinations of such circuitry elements. The memory 812 may be viewed as an example of “processor-readable storage media” storing executable program code of one or more software programs.
Articles of manufacture comprising such processor-readable storage media are considered illustrative embodiments. A given such article of manufacture may comprise, for example, a storage array, a storage disk or an integrated circuit containing RAM, ROM or other electronic memory, or any of a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Numerous other types of computer program products comprising processor-readable storage media can be used.
Also included in the processing device 802-1 is network interface circuitry 814, which is used to interface the processing device with the network 804 and other system components, and may comprise conventional transceivers.
The other processing devices 802 of the processing platform 800 are assumed to be configured in a manner similar to that shown for processing device 802-1 in the figure.
Again, the particular processing platform 800 shown in the figure is presented by way of example only, and the given system may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, storage devices or other processing devices.
Multiple elements of an information processing system may be collectively implemented on a common processing platform of the type shown in
For example, other processing platforms used to implement illustrative embodiments can comprise different types of virtualization infrastructure, in place of or in addition to virtualization infrastructure comprising virtual machines. Such virtualization infrastructure illustratively includes container-based virtualization infrastructure configured to provide Docker containers or other types of LXCs.
As another example, portions of a given processing platform in some embodiments can comprise converged infrastructure.
It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform.
Also, numerous other arrangements of computers, servers, storage devices or other components are possible in the information processing system. Such components can communicate with other elements of the information processing system over any type of network or other communication media.
As indicated previously, components of an information processing system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. For example, at least a portion of the functionality shown in one or more of the figures is illustratively implemented in the form of software running on one or more processing devices.
It should again be emphasized that the above-described embodiments are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the disclosed techniques are applicable to a wide variety of other types of information processing systems. Also, the particular configurations of system and device elements and associated processing operations illustratively shown in the drawings can be varied in other embodiments. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the disclosure. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.