The present application generally relates to real-time management of data center compute load to maximize the value of power economics and supply. In particular, this application describes the ability to adjust compute capacity, and therefore power usage, to facilitate balancing of power supply and demand, and the ability to economically maximize utilization of intermittent power sources such as wind or solar.
The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purpose of illustration only and merely depict typical or example embodiments.
These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description, where further description is provided.
Power optimization generally involves techniques and strategies employed to manage power consumption of electronic devices and/or systems. Power optimization is crucial in various domains, including mobile devices, embedded systems, data centers, and energy-constrained environments. Overall, power optimization techniques focus on balancing performance requirements with power consumption to reduce energy costs, improve overall power efficiency, and promote sustainability in electronic systems. A particular technique that is used for power optimization in the realm of computing is dynamic power management. Dynamic power management can include dynamically adjusting the power consumption of different systems and/or resources within the data center based on the workload, demand, and other environmental conditions. Dynamic power management in data centers aims to strike a balance between performance, resource utilization, and power consumption. There are several different approaches used in industry to perform dynamic power management, including: workload consolidation; dynamic resource provisioning; power capping; energy-aware scheduling; dynamic cooling; and energy monitoring and management.
However, many dynamic power management systems that are currently utilized in industry do not have the capability to perform continuous and real-time control of the data center and thus cannot achieve optimal performance. For instance, there are existing systems that are capable of migrating compute loads to alternate geographies and/or completely curtailing data center usage. However, these existing systems are not able to partially adjust compute load, and related power usage, on a real-time basis and control resources based on state awareness in conjunction with operational feedback from the data center. This creates a delay in the control cycle, which can require a substantial period of time to execute follow-on commands necessary to achieve a particular consumption target. This significantly limits the ability of the existing art to respond in real time to intermittent generation, or to economically optimize usage to correspond with power supply and demand imbalances. In particular, when utilizing intermittent generation such as wind or solar, existing power management systems will issue commands to manage operations and must wait to observe the residual impact that the particular commands have on the site prior to generating the next commands. This time lull substantially limits the efficiency and adaptability of adjustments to operation at the data center, preventing power management from responding effectively to variability in power generation. Due to the lack of awareness of real-time operating usage, combined with the estimated impact of commands that have been issued but are not yet reflected in usage, there is significant risk that subsequent commands will result in over-control of the data center. This could cause either under-performance of the data center or attempted over-utilization of available power, resulting in physical damage to resources.
Furthermore, many existing dynamic power management systems are designed for vertical scalability. That is, scalability of the systems is limited to the resources on the server hosting the application. For example, if the number of compute resources to be controlled increases, the greater computational demand of the control application would require the management server to be upgraded to accommodate the larger capacity. Thus, as data center sites scale up, a conventional management system may require more or faster processors, increased storage capacity, and the like, which limits expansion to the resources of an individual server. Due to this vertically scalable design, existing dynamic power management systems can have significant limitations in terms of costs, resource availability, and their ability to handle extremely large-scale applications.
As disclosed herein, distinct methods and systems may be provided for real-time, scalable, systematic, event-based, model-driven operational value optimization. As will be described in greater detail throughout, the disclosed methods and systems receive a plethora of real-time information that impacts the optimization of operations at the data center. These include, though are not limited to, power-market data (e.g., real-time and day-ahead power prices, power generation, consumption, ancillary services markets, etc.), weather-based data that may impact performance of the data center's resources and/or the power sources/supply, market-based data relating to revenue that may be derived from the data center's operation, real-time operational feedback from the data center, and stochastic modeling to understand correlations and future predictions of data sources. Various embodiments may apply some or all of this plurality of real-time data to powerful decision-making logic which analyzes various factors gleaned from the real-time information in order to determine a dynamic value optimization for operating the data center. Embodiments may also contain the ability to adaptively command operation of the data center (e.g., full curtailment of power and/or compute load, partial curtailment of power and/or compute load, etc.) in a manner that dynamically and in real time optimizes operations. Furthermore, embodiments may provide a solution that has modularly designed logic and a horizontally designed framework to efficiently support a plethora of applications, even at large-scale.
In the example of
The electrical grid 125 is a complex network of interconnected power generation plants, transmission lines, distribution systems, and consumers. The electrical grid 125 is responsible for delivering electricity from producers, such as power plants, to the consumers that can include homes, businesses, and other end-users. As alluded to above, the front-of-meter (FOM) site 121, behind-the-meter (BTM) site 122, and hybrid site 123 can be connected to the renewable energy producers 124 and/or the electrical grid 125 and/or non-renewable energy producers (not shown in
In the example of
In
In the example of
The FOM site 151 can be a resource site where energy is sourced directly from the power grid. The FOM site 151 is subject to a metered consumption tariff which can include fluctuating real-time pricing, transmission charges, and other related fees. FOM sites, such as FOM site 151, can also participate in other hedging or revenue generating schemes offered by the grid provider, such as pre-purchasing power on a day-ahead basis or selling the curtailment optionality.
The BTM site 152 can be a co-located resource site where power is sourced directly from the generation asset and/or energy storage mechanism. The generation asset may be an intermittent or renewable source such as wind or solar, or a traditional thermal source. In other words, the BTM site 152 can operate independently of the utility grid and is directly connected to the producers' generation facility and electrical infrastructure.
The hybrid site 153 is a combination of a FOM site 151 and a BTM site 152, wherein a site with BTM generation also has a grid connection to offer source optionality and increased availability of power.
In the example of
The ROVO system 130, in accordance with various embodiments, combines control, data, and insight to maximize the operational value optimization of the resource site 150 by dynamically adjusting the power and compute load in real time. For example, if the ROVO system 130 determines that it is not profitable for the resource site 150 to currently operate, as a current spike in power costs outweighs the revenue from the compute, then the ROVO system 130 has the capability to issue a real-time control 140 to the resource site 150 which curtails the power and/or compute load at the site 150 until a later time when the power costs are lower. As a general description, the ROVO system 130 can receive real-time data streamed from the power sources 120 that may contribute to operational value optimization, such as wind turbine data, weather data, production data, energy market data, and the like. The ROVO system 130 may be configured with logic 131 that can continuously monitor, model, and analyze this streamed data in real time to determine a compute and/or power utilization target, and send controls 140 in real time to coordinate the resources at the resource site 150 in a manner to reach the target and achieve operational value optimization. It should be appreciated that although the embodiments and the example of
The input subsystem 210 may be configured to receive multiple streams of real-time data from a plurality of sources that are associated with the function of the data center, for instance different power sources that can supply electricity and/or renewable energy (e.g., wind, solar, etc.) to the data center. In the example of
The optimization subsystem 220 may be configured to implement the business logic 221, which determines a value optimization for operation of the data center, and to generate controls 222 which govern the operation of the data center in real time in accordance with the determined value optimization. According to various embodiments, the business logic 221 may be configured to determine value optimization through real-time modeling and portfolio (e.g., asset) optimization. The business logic 221 implements a modular and sophisticated decision tree given the real-time inputs 211-215 that have been fed into the system 200, where the inputs 211-215 can be analyzed in a manner that enables the system 200 to have awareness of the current operational condition of the data center and various factors that may directly impact the efficiency and/or optimization of the data center's operation, such as power supply availability, power costs (e.g., actual, predicted), and potential revenue from the compute load and/or sale of power or power-related products. Greater detail regarding the functionality of decision-making aspects of the business logic 221 is described in reference to
As an example, the business logic 221 can analyze power markets and a cost associated with executing its compute load to determine an optimized operation, for instance temporarily curtailing the compute load while the grid is at peak usage (e.g., associated with substantially high electricity costs) until the grid usage is reduced to within a range with lower costs (e.g., a predetermined power cost threshold). In another example, the business logic 221 may be configured to receive signals/messages that may drive a discrete decision, such as an emergency message from an electrical power provider or grid operator that electricity must be turned off in order to protect the grid, which would then drive a decision to fully curtail the compute load and/or power down the data center.
Additionally, the business logic 221 may be configured to generate and utilize stochastic modeling of real-time data (e.g., inputs 211-215) in order to derive the current operational efficiency of the data center and further extrapolate a predicted optimal operation for the data center in a dynamic manner, for instance updating the value optimization of the data center every second. For example, the business logic 221 can train and apply a stochastic model to provide predictions for future available wind generation capacity, based on modeling and analyzing real-time data that impacts operation of the wind farm, such as weather data, wind speed, and production data from the wind farm. Thus, the business logic 221 can leverage the stochastic model, which has been trained on these types of data over time, in order to predict availability related to the power source, and predictively manage the compute load in line with the availability of power sourced from the wind farm. Moreover, a key feature of embodiments of the system 200 is the ability to receive feedback from the data centers in real time, which enables re-training of models based on the most current operational state of the data center. As a result, the business logic 221 can fine-tune its models by continuously capturing and analyzing information fed back from the data center, which monitors the impact that the dynamic adjustments issued from the system 200 have on the data center actually realizing optimization (e.g., previous commands causing the data center to diverge from value optimization), and re-calibrate the models if necessary.
Additionally, the business logic 221 can further leverage stochastic models to derive and analyze various probabilities that may impact a value optimization of the data center's operation. As an example, the business logic 221 can determine a probability of a short-term drastic increase (e.g., spike) in power price within a determined time period, which allows the probability of a potential power price spike to be a factor that is considered in setting thresholds for value optimization. The resulting operation, for instance curtailing the compute load in the case of a high probability of a spike, can be incorporated within the decision tree of the business logic 221. The business logic 221 can also use stochastic modeling in order to consider a predicted impact that modifying the data center's operation may have on value optimization. For instance, there is a cost associated with curtailing the compute load at the data center, such as potential lost revenue from not executing the compute, lost efficiency/production associated with the data center, cost associated with restoring the data center and/or compute load after curtailment, and the like. The business logic 221 may be configured with stochastic models that can statistically make these types of determinations related to modifying the data center's operation, such as an amount of time needed to recuperate the costs in the event of data center and/or compute load curtailment. In this example, the business logic 221 may determine that, even if power costs are higher, it is statistically not optimal to curtail a compute load due to a significantly high cost of curtailment and a lengthy process to recover the losses from curtailing the compute load.
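To make the spike-probability and cost-recovery reasoning above concrete, the following is a minimal sketch of how such estimates could be computed. It assumes a simple Gaussian random-walk price model and hypothetical function and parameter names; it is an illustration only and does not reproduce the stochastic models of the disclosure.

```python
import numpy as np

def spike_probability(current_price, spike_threshold, volatility,
                      horizon_steps, n_paths=10_000, seed=0):
    """Estimate the probability that the power price exceeds spike_threshold
    within the horizon, assuming a zero-drift Gaussian random-walk price model
    (an assumption made only for this illustration)."""
    rng = np.random.default_rng(seed)
    # Simulate n_paths price paths, each step adding a zero-mean Gaussian shock.
    shocks = rng.normal(0.0, volatility, size=(n_paths, horizon_steps))
    paths = current_price + np.cumsum(shocks, axis=1)
    # Fraction of paths whose maximum price reaches the spike threshold.
    return float(np.mean(paths.max(axis=1) >= spike_threshold))

def hours_to_recover_curtailment(curtail_cost, compute_revenue_per_hour,
                                 power_cost_per_hour):
    """Rough time needed to earn back the one-time cost of curtailing and
    restoring, given the hourly margin once the load is running again."""
    margin = compute_revenue_per_hour - power_cost_per_hour
    return float("inf") if margin <= 0 else curtail_cost / margin
```

A high spike probability or a long recovery time from a sketch like this could then feed the thresholds used in the decision tree described above.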
In addition, the business logic 221 may be configured to perform portfolio optimization. The system 200 may manage a plurality of different elements that are considered assets (e.g., having financial and/or physical value), such as multiple data centers (and its resources), power markets, ancillary services markets, financial assets, and the like. The business logic 221 has the capability to track, analyze, correlate, and dynamically manage the portfolio of assets, for instance several data center sites, in order to achieve value optimization.
The optimization subsystem 220 also implements the dynamic generation of controls 222 which can effectuate an adjustment of the data center's operation substantially in real time. Based on value optimization determinations and decisions regarding particular modifications/adjustments to the operation of the data center that are made by the business logic 221, the optimization subsystem 220 generates corresponding controls 222 which command the functions of the data center to execute the decisions (e.g., curtailment, restore) in a manner that achieves the determined value optimization. The controls 222 can command the function of various elements of the data center in real time, such as the compute, power, system/infrastructure components, and environmental elements, in order to ultimately optimize the data center's efficiency. For example, controls 222 can include functions such as curtailment of the compute load and/or power, partial curtailment of the compute load and/or power (e.g., variable curtailment in a range between full operation and full curtailment), restoration of the compute load and/or power, and partial restoration (e.g., variable restoration in a range between curtailment and fully restored). In an embodiment, the optimization subsystem 220 can generate new controls 222 on an ongoing basis, allowing continuous fine-tuning of the data center's operation and thereby enabling granular optimization of the data center. In some cases, the controls 222 can include commands to the data center's infrastructure components, for instance increasing the speed of exhaust fans (e.g., cooling the servers), in a manner that improves the performance of the data center's resources to achieve optimization. Furthermore, according to various embodiments, controls 222 can be generated for each resource (e.g., server) that is located within a particular data center. That is, the system 200 may include the capability to simultaneously interact with and control a plurality of data centers and their respective resources across multiple sites (e.g., remote and/or co-located) at an extremely large scale (e.g., upwards of 100,000 servers).
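As a purely illustrative sketch of what a control 222 message might contain, the snippet below defines a hypothetical command structure covering full/partial curtailment, full/partial restoration, and infrastructure adjustments, addressed down to an individual resource. The class, field, and enum names are assumptions made for illustration and are not defined by the disclosure.

```python
from dataclasses import dataclass
from enum import Enum

class ControlAction(Enum):
    CURTAIL = "curtail"                    # full curtailment of compute/power
    PARTIAL_CURTAIL = "partial_curtail"    # variable reduction toward a target level
    RESTORE = "restore"                    # full restoration
    PARTIAL_RESTORE = "partial_restore"    # variable restoration toward a target level
    INFRA_ADJUST = "infra_adjust"          # e.g., raise exhaust-fan speed

@dataclass
class Control:
    site_id: str            # data center site the command targets
    resource_id: str        # individual resource (e.g., a single server)
    action: ControlAction
    target_level: float     # fraction of full operation to reach (0.0 to 1.0)
    issued_at_epoch_ms: int

# Example: partially curtail one server to 40% of full load.
cmd = Control("site-01", "server-000123", ControlAction.PARTIAL_CURTAIL,
              0.40, 1_700_000_000_000)
```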
The output subsystem 230 may be configured to generate and disseminate various outputs 231 from the system 200 to connected entities, such as the data centers. Examples of outputs 231 that can be communicated by the output subsystem 230 include but are not limited to: compute/hardware commands; horizontal scalability; distributed management; continuous control (e.g., state management); power optimization strategies; asset optimization strategies; model feedback loop (e.g., calibration); real-time metrics and monitoring; and the like. In an embodiment, the controls 222 are communicated as output 231 using command control signal(s) transmitted to data centers across a distributed communication network (e.g., wide-area network). The outputs 231 are generated by the ROVO system 200 in a manner that ultimately achieves and continuously maintains an optimal run state of the data center (in accordance with the value optimization and operational decisions from the business logic 221).
Generally, process 400 implements a decision-making logic that may be optimal for a FOM site, according to some embodiments. In a FOM site scenario, there may be several goals that govern the logic in process 400, including, for example: the site should only operate when profitable; revenue should be enhanced through participation in day-ahead markets; ancillary services should be monetized through rapid curtailment at the request of the grid operator; and coincidental demand charges should be avoided. That is, the process 400 implements a logic that principally endeavors to curtail in order to capture excess revenue by selling the power back to the market from existing forward purchases or otherwise monetizing grid balancing programs such as ancillary services, as opposed to curtailing to avoid costs. Particularly, process 400 may be configured to optimize dollar profitability for grid-connected sites.
Process 400 can begin at operation 405, where it is determined whether an external indicator is received to curtail. In some cases, the option to curtail is sold to an external party as a means of revenue. Therefore, curtailment of the data center's power and/or compute is under external control. If operation 405 determines that an external indicator to curtail was received (shown as “Yes” in
Next, at operation 415, a check is performed to determine if a cost of power plus a down buffer is greater than the revenue from compute. According to various embodiments, the down buffer and the up buffer are set via a stochastic model on “cost to curtail” and predictions of power and market volatility. The down buffer and the up buffer are values generated by a data science model that streams real-time projections of the time needed to recover costs to curtail based on the volatility of power and magnitude of revenue loss or gain. If the cost of power added to (+) a down buffer is greater than the revenue from compute (shown as “Yes” in
Thereafter, operation 420 involves determining whether the cost of power minus (−) the up buffer is less than the revenue from compute. In the case where the cost of power minus the up buffer is less than the revenue from compute (shown as “Yes” in
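As an illustration of this FOM decision flow, the following is a minimal sketch. The comparisons mirror operations 405, 415, and 420 as described above; because the figure is not reproduced here, the specific curtail/restore outcome taken on each branch is an assumption, as are the function and parameter names.

```python
def fom_decision(external_curtail_signal: bool,
                 power_cost: float,
                 compute_revenue: float,
                 down_buffer: float,
                 up_buffer: float) -> str:
    """One pass of the FOM-site decision logic (operations 405-420).
    Branch outcomes are assumptions for illustration only."""
    # Operation 405: an external party has exercised the sold curtailment option.
    if external_curtail_signal:
        return "curtail"
    # Operation 415: power cost plus the down buffer exceeds compute revenue.
    if power_cost + down_buffer > compute_revenue:
        return "curtail"
    # Operation 420: power cost minus the up buffer is below compute revenue.
    if power_cost - up_buffer < compute_revenue:
        return "restore"
    # Otherwise hold the current operating state.
    return "hold"
```

The down and up buffers would be streamed from the data science model described above rather than fixed constants.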
Process 500 can be described as implementing a decision-making logic that may be optimal for a BTM site, according to some embodiments. A BTM site scenario may have goals (e.g., differing from the goals of the previously described FOM site scenario of
In
At operation 515, the determination is made whether the market price minus (−) the up buffer is less than the revenue from compute. When operation 515 determines that the market price minus (−) the up buffer is less than the revenue from compute, then the process 500 curtails the data center at operation 520. Alternatively, the process 500 continues with the current operation of the data center and moves to the next operation 525 when it is determined in operation 515 that the market price minus (−) the up buffer is not less than (e.g., greater than, equal) the revenue from compute (shown in
Subsequently, operation 525 involves another check to determine whether a current usage is greater than a forecasted power production. In the example, operation 525 uses a modeled 45 sec. ahead production. The process 500 then proceeds to perform curtailment to the 45 sec. target at operation 530, when the current usage is greater than the modeled 45 sec. ahead production (shown as “Yes” in
Referring back to operation 525, if it is determined that the current usage is not greater than (e.g., less than, equal to) a modeled 45 sec. ahead production (shown as “No” in
Next, operation 535 determines whether the current usage is less than a modeled 6 minute ahead production. In the case where the current usage is less than a modeled 6 minute ahead production (shown as “Yes” in
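The forecast-tracking portion of this flow (operations 525 through 535) can be illustrated with a short sketch. It is a simplified reading of the description above; the restore outcome on the 6-minute branch and all names are assumptions, since the figure is not reproduced here.

```python
def btm_production_tracking(current_usage_mw: float,
                            forecast_45s_mw: float,
                            forecast_6min_mw: float) -> tuple[str, float]:
    """Forecast-tracking checks of the BTM flow (operations 525-535),
    returning a suggested action and the usage target it refers to."""
    # Operation 525/530: usage exceeds the 45-second-ahead production forecast,
    # so curtail down to that near-term target to stay within generation.
    if current_usage_mw > forecast_45s_mw:
        return ("curtail_to_target", forecast_45s_mw)
    # Operation 535: usage is below the 6-minute-ahead production forecast,
    # so (assumed) additional load can be restored up toward that target.
    if current_usage_mw < forecast_6min_mw:
        return ("restore_to_target", forecast_6min_mw)
    # Otherwise hold the current load.
    return ("hold", current_usage_mw)
```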
Process 600 can be described as implementing a decision-making logic that may be optimal for a hybrid site, according to some embodiments. As previously alluded to, a hybrid site can have aspects and challenges that are a combination of FOM sites and BTM sites. Thus, a hybrid site scenario may have its own unique optimization goals (e.g., differing from the goals of the previously described FOM site scenario of
At operation 615, when the data center is currently operating, a determination is made as to whether both the price at which power can be sold to capture revenue and the price to purchase additional power from the grid in excess of the contracted amount, plus (+) the down buffer, are greater than the revenue from compute. If operation 615 determines that the power purchase and sales prices plus (+) the down buffer are greater than the revenue from compute (shown in
Subsequently, at operation 625, a next conditional check is performed to determine whether the current usage is greater than a contract or available production amount. When the current usage is greater than a contract or available production amount (shown in
Thereafter, at operation 630, the process 600 determines whether the power purchase price plus (+) the down buffer is greater than revenue from the compute. If the purchase price plus (+) the down buffer is greater than revenue from the compute (shown in
Then, operation 640 checks whether the current usage is less than a contracted or available production amount. If the current usage is less than a contracted or available production amount (shown as “Yes” in
At operation 645, another conditional check is performed to determine whether the power purchase price minus (−) the up buffer is less than the revenue from compute. When operation 645 determines that the purchase price minus (−) the up buffer is less than the revenue from compute (shown in
In operation 655, it is determined whether the power purchase and sales prices minus (−) the up buffer are less than the revenue from compute. In the case where both the purchase and sales prices minus (−) the up buffer are less than the revenue from compute (shown as “Yes” in
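To summarize the hybrid-site logic, the sketch below evaluates each comparison named in operations 615 through 655. It deliberately does not chain the checks or assign curtail/restore actions, since that structure is defined by the figure, which is not reproduced here; all names are hypothetical.

```python
def hybrid_condition_checks(sell_price: float, purchase_price: float,
                            compute_revenue: float,
                            down_buffer: float, up_buffer: float,
                            current_usage: float,
                            contracted_or_available: float) -> dict:
    """Evaluate the individual comparisons of the hybrid-site flow
    (operations 615-655); branching between them is defined by the figure."""
    return {
        # Operation 615: both sell and purchase prices, plus the down buffer,
        # exceed compute revenue.
        "op615": (sell_price + down_buffer > compute_revenue) and
                 (purchase_price + down_buffer > compute_revenue),
        # Operation 625: usage exceeds the contracted or available production.
        "op625": current_usage > contracted_or_available,
        # Operation 630: grid purchase price plus the down buffer exceeds revenue.
        "op630": purchase_price + down_buffer > compute_revenue,
        # Operation 640: usage is below the contracted or available production.
        "op640": current_usage < contracted_or_available,
        # Operation 645: purchase price minus the up buffer is below revenue.
        "op645": purchase_price - up_buffer < compute_revenue,
        # Operation 655: both prices minus the up buffer are below revenue.
        "op655": (purchase_price - up_buffer < compute_revenue) and
                 (sell_price - up_buffer < compute_revenue),
    }
```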
The computer system 700 also includes a main memory 706, such as a random-access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 702 for storing information and instructions to be executed by processor 704. Main memory 706 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 704. Such instructions, when stored in storage media accessible to processor 704, render computer system 700 into a special-purpose machine that is customized to perform the operations specified in the instructions.
The computer system 700 further includes a read only memory (ROM) 708 or other immutable storage device coupled to bus 702 for storing static information and instructions for processor 704. A storage device 710, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), message topic, etc., is provided and coupled to bus 702 for storing information and instructions.
The computer system 700 may be coupled via bus 702 to a display 712, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704. Another type of user input device is cursor control 716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 704 and for controlling cursor movement on display 712. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
The computing system 700 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
In general, the terms “component,” “engine,” “system,” “database,” “data store,” and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interruptions. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
The computer system 700 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 700 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 700 in response to processor(s) 704 executing one or more sequences of one or more instructions contained in main memory 706. Such instructions may be read into main memory 706 from another storage medium, such as storage device 710. Execution of the sequences of instructions contained in main memory 706 causes processor(s) 704 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 710. Volatile media includes dynamic memory, such as main memory 706. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 702. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
The computer system 700 also includes a communication interface 718 coupled to bus 702. Communication interface 718 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 718 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 718 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or wide area network (WAN) component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 718 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet.” Local network and Internet both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 718, which carry the digital data to and from computer system 700, are example forms of transmission media. The computer system 700 can send messages and receive data, including program code, through the network(s), network link and communication interface 718. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network, and the communication interface 718. The received code may be executed by processor 704 as it is received, and/or stored in storage device 710, or other non-volatile storage for later execution.
Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.
As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
The present application is a continuation of and claims priority to U.S. patent application Ser. No. 18/349,748, filed Jul. 10, 2023 and titled “OPERATIONAL VALUE OPTIMIZATION FOR DATA CENTER POWER AND COMPUTATIONAL LOAD,” which is incorporated herein by reference in its entirety.
Related U.S. Application Data: parent application Ser. No. 18/349,748, filed Jul. 2023 (US); child application Ser. No. 18/733,588 (US).