This patent arises from a US National Phase entry of International Patent Application Serial No. PCT/US2008/067447, filed Jun. 19, 2008, which is hereby incorporated by reference herein in its entirety.
Data centers, which may include any collection of computing and/or communication components, may be configured to provide computing and/or communications capabilities to one or more users. As the size of a data center increases, the number of components and/or the size of equipment utilized may increase. Accordingly, in many data centers resources such as cooling, power, and other environmental resources may be in demand. While allocation of such resources may be manually configured upon creation of the data center, such a solution is often far from ideal because the manual configuration may result in miscalculations. Additionally, as demands of the data center change over time, such manual solutions may be ineffective.
Included are embodiments for capacity planning. At least one embodiment includes a computer fluid dynamics (CFD) component configured to model a data center, the data center including at least one component, and a monitor component configured to receive data associated with the modeled data center and translate the received data for 3-dimensional (3-D) modeling. Some embodiments include a diagnostics component configured to determine, from the 3-D modeling, at least one point of potential error and a 3-D visualization component configured to receive the translated data and provide a 3-D visualization of the data center, the 3-D visualization configured to provide a visual representation of the at least one point of potential error.
Also included are embodiments of a method. At least one embodiment of a method includes receiving thermal data associated with at least one component at a data center and performing at least one calculation on the thermal data to determine a 3-dimensional (3-D) model of the data center. Some embodiments include determining, from the 3-D model, at least one point of potential error and providing, from the received data, a 3-D visualization of the data center, the 3-D visualization configured to provide a visual representation of the at least one point of potential error.
Other embodiments and/or advantages of this disclosure will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description and be within the scope of the present disclosure.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. While several embodiments are described in connection with these drawings, there is no intent to limit the disclosure to the embodiment or embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.
As the demands of data centers evolve, thermal capacity data may be utilized for optimization of the data center. At least one embodiment disclosed herein includes visually comparing current data center thermal operation with the maximum operation, as predicted by computer fluid dynamics (CFD) to provide visual capacity planning.
Referring to the drawings,
More specifically, the data center 106a may include a sub-network 105a for facilitating communication of data and/or power to the equipment of the data center 106a. As illustrated, the data center 106a may include one or more computer room air conditioning units (CRACs) 108a for providing ventilation and/or cooling. Similarly, the data center 106a may include a fan 110a for further facilitating cooling of the data center equipment. The data center 106a may also include one or more racks 112a and/or one or more local computing devices 114a for providing information to users. Similarly, the network 100 may be coupled to one or more other data centers 106b, which may include a CRAC 108b, fan 110b, rack 112b, local computing device 114b, and/or other equipment.
The memory component 284 can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and/or nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, the memory component 284 may incorporate electronic, magnetic, optical, and/or other types of storage media. One should note that the memory 284 can have a distributed architecture (where various components are situated remote from one another), but can be accessed by the processor 282.
The logic in the memory component 284 may include one or more separate programs, which may include an ordered listing of executable instructions for implementing logical functions. In the example of
A system component and/or module embodied as software may also be construed as a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When constructed as a source program, the program is translated via a compiler, assembler, interpreter, or the like, which may or may not be included within the memory component 284, so as to operate properly in connection with the operating system 286.
The input/output devices that may be coupled to the system I/O Interface(s) 296 may include input devices, for example but not limited to, a keyboard, mouse, scanner, touch screen, microphone, etc. Further, the input/output devices may also include output devices, for example but not limited to, a printer, display, speaker, etc. Finally, the input/output devices may further include devices that communicate both as inputs and outputs, for instance but not limited to, a modulator/demodulator (modem; for accessing another device, system, or network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc.
Additionally included are one or more of the network interfaces 298 for facilitating communication with one or more other devices. More specifically, network interface 298 may include any component configured to facilitate a connection with another device. While in some embodiments, among others, the remote computing device 102 can include the network interface 298 that includes a Personal Computer Memory Card International Association (PCMCIA) card (also abbreviated as “PC card”) for receiving a wireless network card, this is a nonlimiting example. Other configurations can include the communications hardware within the remote computing device 102, such that a wireless network card is unnecessary for communicating wirelessly. Similarly, other embodiments include the network interfaces 298 for communicating via a wired connection. Such interfaces may be configured with Universal Serial Bus (USB) interfaces, serial ports, and/or other interfaces.
If the remote computing device 102 includes a personal computer, workstation, or the like, the software in the memory 284 may further include a basic input output system (BIOS) (omitted for simplicity). The BIOS is a set of software routines that initialize and test hardware at startup, start the operating system 286, and support the transfer of data among the hardware devices. The BIOS is stored in ROM so that the BIOS can be executed when the remote computing device 102 is activated.
When the remote computing device 102 is in operation, the processor 282 may be configured to execute software stored within the memory component 284, to communicate data to and from the memory component 284, and to generally control operations of the remote computing device 102 pursuant to the software. Software in the memory component 284, in whole or in part, may be read by the processor 282, perhaps buffered within the processor 282, and then executed.
One should note that while the description with respect to
Additionally, while the logic is illustrated in
Referring again to the logic components 281, 283, and 285, the three dimensional (3-D) visualization logic 285 may be configured to interact with the monitor logic 283. The 3-D visualization logic 285 may also be configured to utilize results from the CFD logic 281 as a thermal maximum limit and provide a visual output of the differences between current operations and CFD predictions. In this way, a technician and/or user may be able to determine whether the data center 106 is under-provisioned or over-provisioned and would be able to plan equipment acquisitions, cooling enhancements, power enhancements, and/or equipment location changes across time.
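As a nonlimiting illustration (not part of the original disclosure), the comparison of current operations against CFD-predicted maxima could be sketched as follows; the function name, the per-rack dictionaries, and the headroom margin are assumptions made for illustration only.

```python
# Sketch: classify each rack by how close its monitored temperature is to
# the CFD-predicted thermal maximum. Names and thresholds are illustrative.

def provisioning_report(current, cfd_max, margin=0.1):
    """current / cfd_max: dicts mapping rack id -> temperature (same units).
    `margin` is the fraction of remaining headroom below which a rack is
    flagged as under-provisioned with respect to cooling."""
    report = {}
    for rack, temp in current.items():
        limit = cfd_max[rack]
        headroom = (limit - temp) / limit
        if headroom < 0:
            report[rack] = "over limit"          # exceeds CFD prediction
        elif headroom < margin:
            report[rack] = "under-provisioned"   # little cooling headroom
        else:
            report[rack] = "capacity available"  # room to grow
    return report
```

Such a report could feed the visual output described above, with each classification mapped to a color or shading in the 3-D scene.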
The CFD logic 281 may include a framework for extracting CFD-generated data from commercial tools and translating this data into a 3-D visualization format. Some embodiments may be configured to not only extract equipment data, but also extract data regarding the physical space where the equipment resides. Since the CFD logic 281 may be configured with most data center components and geometries, a full data center layout may be modeled by the CFD logic. Some embodiments may be configured to utilize extensible markup language (XML) as an abstraction layer between the CFD logic 281 and the 3-D visualization logic 285.
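As a nonlimiting sketch of the XML abstraction-layer idea (the element names, attributes, and record fields below are invented for illustration and are not specified in this disclosure), extracted equipment geometry and temperatures could be serialized as:

```python
# Sketch: serialize extracted equipment layout and CFD temperatures into
# XML for consumption by the 3-D visualization logic. Tag and attribute
# names are illustrative assumptions.
import xml.etree.ElementTree as ET

def layout_to_xml(equipment):
    """equipment: list of dicts with id, type, position (x, y, z), and temp."""
    root = ET.Element("datacenter")
    for item in equipment:
        e = ET.SubElement(root, "equipment", id=item["id"], type=item["type"])
        ET.SubElement(e, "position",
                      x=str(item["x"]), y=str(item["y"]), z=str(item["z"]))
        ET.SubElement(e, "temperature").text = str(item["temp"])
    return ET.tostring(root, encoding="unicode")
```

Using XML in this way keeps the visualizer independent of any particular commercial CFD tool, since each tool only needs an exporter to the shared schema.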
The monitor logic 283 may be configured to determine how to receive and/or utilize the monitoring data of data center equipment (108, 110, 112, 114), as well as how to translate the monitoring data into a format such that the 3-D visualization logic 285 can render the desired graphics. Similarly, the 3-D visualization logic 285 may be configured to create an interactive 3-D scene containing data center equipment and/or components using the layout data. Similarly, the 3-D visualization logic 285 may be further configured to aggregate the CFD data for representation using one or more different visualization models.
These models may vary, depending on the particular logic used. The CFD data may be represented using one or more different indicator schemes, (such as false coloring, shading, etc.), which may vary depending on what type of data is being analyzed. As illustrated in
Similarly, the 3-D visualization logic 285 may be configured to provide a user interface with information and controls, such as a menu bar with general options and/or a side panel with features like a legend for color/shading scale and a device tree list, as discussed below, with regard to
Also included in the nonlimiting example of
One should also note that, while the menu bar 432 is only illustrated in
The CRAC influence region may be determined by using a thermal correlation index (TCI) metric. The TCI may be a number between 0 and 1 and may be defined as indicated in equation (1), which depicts a nonlimiting example of a calculation that may be utilized for determining the TCI. Utilizing equation (1), a TCI value of 1 may indicate that the data center has reached its maximum capacity and/or that the current cooling infrastructure is insufficient to maintain current or future computing capacity.
Contrary to current solutions, embodiments disclosed herein include data center visualization based on an open model, which could be used with the different tools on the market, and may be configured to provide a unique distinguisher among them: the capability of thermal capacity planning.
Additionally, the CFD logic 281 may be included and implemented using, as input, an analysis tool that predicts 3-D airflow, heat transfer, and contamination distribution in and around buildings of all types and sizes (e.g., Flovent). The CFD logic 281 may be configured to take the geometry and the resulting temperature of the data center and translate this information into an XML format that the 3-D visualization logic understands. The CFD logic 281 may also be configured to calculate different thermal metrics that are not produced by the software, such as the thermal correlation index, supply heat index (SHI), local workload placement index (LWPI), etc.
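As a nonlimiting illustration of one such derived metric, the supply heat index is commonly defined in the data-center literature (this formula is drawn from that literature, not from the present disclosure) as the fraction of a rack's temperature rise caused by hot-air recirculation before the inlet:

```python
# Sketch: supply heat index (SHI) as commonly published, computed from
# the CRAC supply temperature and a rack's inlet/outlet temperatures.
# SHI = (T_inlet - T_supply) / (T_outlet - T_supply); 0 means no
# recirculation, values approaching 1 mean severe recirculation.

def supply_heat_index(t_inlet, t_outlet, t_supply):
    """All temperatures in the same units; t_outlet must exceed t_supply."""
    return (t_inlet - t_supply) / (t_outlet - t_supply)
```

A metric like this, computed per rack from CFD output, could then be mapped onto the dissecting planes described for the 3-D scene.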
Additionally, the 3-D visualization logic 285 may be configured to receive XML formatted data with the CFD results and the additional thermal metrics and display the results inside the data center scene. At least a portion of these metrics may be visualized as dissecting planes showing colors/shading depending on the metrics bounds in the map (see
On the other hand, the monitor logic 283 may be configured to communicate with a data storage device 105, such as a DSCEM database, and retrieve temperature sensor information to display in the 3-D scene. The monitor logic 283 may be configured to connect to the monitoring agent (the DSCEM database in this case) and translate data into an XML format, for input to the 3-D monitor. The monitor logic 283 may also be configured to operate as a live streaming provider for the 3-D visualization logic 285. 3-D visualization modes may include actual temperature and temperature versus reference temperature ratio on the racks. Additionally, depending on the particular configuration, temperature bounds may be changed. The monitor logic 283 may be configured to provide detailed specific temperature ranges. Temperature on the racks 112 may be painted on the surface of each rack 112 (e.g., see
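As a nonlimiting sketch of the translation step performed by the monitor logic (the row layout, tag names, and reference temperature below are assumptions for illustration, not details of the DSCEM database):

```python
# Sketch: translate rows of (sensor id, rack id, temperature) retrieved
# from a monitoring database into XML for the 3-D visualization logic,
# including the temperature-versus-reference ratio display mode.
import xml.etree.ElementTree as ET

def sensors_to_xml(rows, reference_temp=25.0):
    root = ET.Element("monitor")
    for sensor_id, rack_id, temp in rows:
        s = ET.SubElement(root, "sensor", id=sensor_id, rack=rack_id)
        ET.SubElement(s, "temperature").text = f"{temp:.1f}"
        # ratio of actual to reference temperature, one possible display mode
        ET.SubElement(s, "ratio").text = f"{temp / reference_temp:.3f}"
    return ET.tostring(root, encoding="unicode")
```

Emitting a fresh document like this at each polling interval is one way the monitor logic could act as a live streaming provider for the visualizer.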
Also included in the nonlimiting example of
More specifically, in operation, the data center may be configured to reallocate devices based on the changing demand of the data center. As a nonlimiting example, a first rack may be configured to serve a first group of users. However, during peak usage times, the workload of this first group of users may increase to a capacity beyond the reasonable capabilities of the first rack 112. Accordingly, the data center may be configured to dynamically reallocate this device, the first group of users, and/or a portion of the first group of users to more efficiently serve the increased load. Similarly, during times of low activity, workload may be reallocated from a second rack to the first rack. This reallocation may be implemented based on a predicted time of high usage; however, this is a nonlimiting example. More specifically, the data center 106 may be configured to determine a threshold of activity and automatically reallocate when that threshold is reached. Options for reallocating these configurations may be provided upon selection of the change reallocations option 934.
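As a nonlimiting sketch of the threshold-based reallocation described above (the greedy target selection and all names are illustrative assumptions, not the disclosed mechanism):

```python
# Sketch: when a rack's utilization crosses a threshold, move the excess
# workload to the currently least-loaded rack. A minimal greedy policy.

def rebalance(loads, threshold=0.8):
    """loads: dict of rack id -> utilization in [0, 1], mutated in place.
    Returns a list of (amount, from_rack, to_rack) moves."""
    moves = []
    for rack, load in sorted(loads.items()):
        if load > threshold:
            target = min(loads, key=loads.get)   # least-loaded rack
            excess = load - threshold
            loads[rack] -= excess
            loads[target] += excess
            moves.append((round(excess, 3), rack, target))
    return moves
```

A production policy would also account for thermal headroom (e.g., avoiding targets near their CFD-predicted limits), but the threshold trigger is the essential step.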
One should note that while the embodiments of
One should note that while embodiments included herein discuss web-based configurations, these are nonlimiting examples. More specifically, some embodiments may be utilized without utilizing the Internet.
The embodiments disclosed herein can be implemented in hardware, software, firmware, or a combination thereof. At least one embodiment disclosed herein may be implemented in software and/or firmware that is stored in a memory and that is executed by a suitable instruction execution system. If implemented in hardware, one or more of the embodiments disclosed herein can be implemented with any or a combination of the following technologies: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
One should note that the flowcharts included herein show the architecture, functionality, and operation of a possible implementation of software. In this regard, each block can be interpreted to represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of order and/or not at all. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
One should note that any of the programs listed herein, which can include an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer-readable medium could include an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of the certain embodiments of this disclosure can include embodying the functionality described in logic embodied in hardware or software-configured mediums.
One should also note that conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more particular embodiments or that one or more particular embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
It should be emphasized that the above-described embodiments are merely possible examples of implementations, merely set forth for a clear understanding of the principles of this disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2008/067447 | 6/19/2008 | WO | 00 | 10/29/2010 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2009/154623 | 12/23/2009 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5995729 | Hirosawa et al. | Nov 1999 | A |
6574104 | Patel et al. | Jun 2003 | B2 |
6694759 | Bash et al. | Feb 2004 | B1 |
6775997 | Bash et al. | Aug 2004 | B2 |
6813897 | Bash et al. | Nov 2004 | B1 |
6819563 | Chu et al. | Nov 2004 | B1 |
6862179 | Beitelmal et al. | Mar 2005 | B2 |
7020586 | Snevely | Mar 2006 | B2 |
7085133 | Hall | Aug 2006 | B2 |
7197433 | Patel et al. | Mar 2007 | B2 |
7295960 | Rappaport et al. | Nov 2007 | B2 |
7386426 | Black et al. | Jun 2008 | B1 |
7596476 | Rasmussen et al. | Sep 2009 | B2 |
7620613 | Moore et al. | Nov 2009 | B1 |
7630867 | Behrens et al. | Dec 2009 | B2 |
7676280 | Bash et al. | Mar 2010 | B1 |
7726144 | Larson et al. | Jun 2010 | B2 |
7756667 | Hamann et al. | Jul 2010 | B2 |
7864530 | Hamburgen et al. | Jan 2011 | B1 |
7881910 | Rasmussen et al. | Feb 2011 | B2 |
7885795 | Rasmussen et al. | Feb 2011 | B2 |
7970592 | Behrens et al. | Jun 2011 | B2 |
7991592 | VanGilder et al. | Aug 2011 | B2 |
8009430 | Claassen et al. | Aug 2011 | B2 |
8090476 | Dawson et al. | Jan 2012 | B2 |
8131515 | Sharma et al. | Mar 2012 | B2 |
8160838 | Ramin et al. | Apr 2012 | B2 |
8175753 | Sawczak et al. | May 2012 | B2 |
8244502 | Hamann et al. | Aug 2012 | B2 |
8315841 | Rasmussen et al. | Nov 2012 | B2 |
8346398 | Ahmed et al. | Jan 2013 | B2 |
8355890 | VanGilder et al. | Jan 2013 | B2 |
8401793 | Nghiem et al. | Mar 2013 | B2 |
8401833 | Radibratovic et al. | Mar 2013 | B2 |
8645722 | Weber et al. | Feb 2014 | B1 |
8712735 | VanGilder et al. | Apr 2014 | B2 |
8725307 | Healey et al. | May 2014 | B2 |
20040065097 | Bash et al. | Apr 2004 | A1 |
20040065104 | Bash et al. | Apr 2004 | A1 |
20040075984 | Bash et al. | Apr 2004 | A1 |
20040089009 | Bash et al. | May 2004 | A1 |
20040089011 | Patel et al. | May 2004 | A1 |
20050023363 | Sharma et al. | Feb 2005 | A1 |
20050192680 | Cascia et al. | Sep 2005 | A1 |
20050225936 | Day | Oct 2005 | A1 |
20050267639 | Sharma et al. | Dec 2005 | A1 |
20060161403 | Jiang et al. | Jul 2006 | A1 |
20070038414 | Rasmussen et al. | Feb 2007 | A1 |
20070062685 | Patel et al. | Mar 2007 | A1 |
20070078635 | Rasmussen et al. | Apr 2007 | A1 |
20070089446 | Larson et al. | Apr 2007 | A1 |
20070174024 | Rasmussen et al. | Jul 2007 | A1 |
20080204999 | Clidaras et al. | Aug 2008 | A1 |
20080269932 | Chardon et al. | Oct 2008 | A1 |
20080288193 | Claassen et al. | Nov 2008 | A1 |
20090012633 | Liu et al. | Jan 2009 | A1 |
20090113323 | Zhao et al. | Apr 2009 | A1 |
20090150123 | Archibald et al. | Jun 2009 | A1 |
20090168345 | Martini | Jul 2009 | A1 |
20090182812 | Bajpay et al. | Jul 2009 | A1 |
20090326879 | Hamann et al. | Dec 2009 | A1 |
20090326884 | Amemiya et al. | Dec 2009 | A1 |
20110040876 | Zhang et al. | Feb 2011 | A1 |
20110060561 | Lugo et al. | Mar 2011 | A1 |
20110246147 | Rasmussen et al. | Oct 2011 | A1 |
20110251933 | Egnor et al. | Oct 2011 | A1 |
20120054527 | Pfeifer et al. | Mar 2012 | A1 |
20120109404 | Pandey et al. | May 2012 | A1 |
20120150509 | Shiel | Jun 2012 | A1 |
20130096829 | Kato et al. | Apr 2013 | A1 |
20130166258 | Hamann et al. | Jun 2013 | A1 |
20130317785 | Chainer et al. | Nov 2013 | A1 |
20140074444 | Hamann et al. | Mar 2014 | A1 |
20140142904 | Drees et al. | May 2014 | A1 |
20140142905 | Drees et al. | May 2014 | A1 |
Number | Date | Country |
---|---|---|
1908590 | Feb 2007 | CN |
2003216660 | Jul 2003 | JP |
2006040095 | Sep 2006 | JP |
2009154623 | Dec 2009 | WO |
Entry |
---|
Patel, C. D., et al., “Thermal Considerations in Cooling Large Scale High Compute Density Data Centers,” The Eighth Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, pp. 767-776. |
CN Office Action cited in Appl. No. 200880129888.1 dated May 16, 2012; 4 pages. |
Patent Cooperation Treaty, “International Search Report,” issued by the International Searching Authority in connection with related PCT application No. PCT/US2008/067447, mailed Jan. 29, 2009 (3 pages). |
Patent Cooperation Treaty, “Written Opinion of the International Searching Authority,” issued by the International Searching Authority in connection with related PCT application No. PCT/US2008/067447, mailed Jan. 29, 2009 (6 pages). |
EP; “Supplementary European Search Report” cited in EP 08771441; Nov. 30, 2012; 8 pages. |
Schmidt R R et al “Challenges of data center thermal management”, IBM Journal of Research and Development, International Business Machines Corporation NY, US. vol. 49, Sep. 1, 2005 pp. 709-723. XP002458425, ISSN: 0018-8646. |
Number | Date | Country | |
---|---|---|---|
20110060561 A1 | Mar 2011 | US |