This application includes a computer program listing appendix in electronic format. The computer program listing appendix is provided as a file entitled ComputerProgramListingAppendix.txt, created on Feb. 15, 2012, which is 27,599 bytes in size. The information in the electronic format of the computer program listing appendix is incorporated by reference, in its entirety, herein.
The present invention relates to air flow analysis, and more particularly, to techniques for using air flow distributions to model thermal zones in a space, such as in a data center.
Energy consumption has become a critical issue for large scale computing facilities (or data centers), triggered by the rise in energy costs, supply and demand of energy and the proliferation of power-hungry information and communication technology (ICT) equipment. Data centers consume approximately two percent (2%) of all electricity globally, or 183 billion kilowatt-hours (kWh) of power; this power consumption is growing at a rate of 12% each year. A significant fraction of the power consumption, i.e., up to 50%, is directed to cooling the heat generating equipment. Consequently, the improvement of data center energy and cooling efficiency is very important. Although best practices have been widely publicized, data center operators are struggling to provision the right amount of cooling. In particular, it is challenging to take different heat densities within a data center into account (i.e., different areas within the data center may require very different amounts of cooling).
Therefore techniques directed to highlighting heat densities within a data center and thereby increasing cooling efficiency would be desirable.
The present invention provides techniques for using air flow distributions to model thermal zones. In one aspect of the invention, a method for modeling thermal zones in a space, e.g., in a data center, is provided. The method includes the following steps. A graphical representation of the space is provided. At least one domain is defined in the space for modeling. A mesh is created in the domain by sub-dividing the domain into a set of discrete sub-domains that interconnect a plurality of nodes. Air flow sources and sinks are identified in the domain. Air flow measurements are obtained from one or more of the air flow sources and sinks. An air flow velocity vector at a center of each sub-domain is determined using the air flow measurements obtained from the air flow sources and sinks. Each velocity vector is traced to one of the air flow sources, wherein a combination of the traces to a given one of the air flow sources represents a thermal zone in the space.
A more complete understanding of the present invention, as well as further features and advantages of the present invention, will be obtained by reference to the following detailed description and drawings.
Presented herein are techniques for dynamically creating and visualizing thermal zones, which enables better provisioning and a more efficient usage of cooling for data centers. It is notable that while the instant techniques are described in the context of a cooling system of a data center, the concepts presented herein are generally applicable to cooling and/or heating systems in general.
In
The ACUs typically receive chilled water from a refrigeration chiller plant (not shown). Each ACU typically comprises a blower motor to circulate air through the ACU and to blow cooled air, e.g., into the sub-floor plenum. As such, in most data centers, the ACUs are simple heat exchangers mainly consuming power needed to blow the cooled air into the sub-floor plenum. Typically, one or more power distribution units (PDUs) (not shown) are present that distribute power to the server racks 101.
It is important to optimize the efficiency of the ACUs. See, for example, Hamann et al., “Uncovering Energy-Efficiency Opportunities in Data Centers,” IBM Journal of Research and Development, vol. 53, no. 3 (2009) (hereinafter “Hamann”), the contents of which are incorporated by reference herein. To this end, it is useful to consider the utilization levels of the ACUs (i.e., utilization (UT)=heat removed/nominal heat load removal capacity) or coefficient of performance (COP) (COP=heat removed/power consumption for ACU fans). It has been shown that utilization levels of ACUs in some data centers are as low as 10% (COP≈1.8). However, utilization levels could potentially be in the range of from about 80% to about 100% (even with some redundancy) with corresponding COPs of from about 14 to about 18, respectively, if efficiency optimization practices are employed. Most data centers require some redundancy. For example, in a data center with eight ACUs, redundancy allows for one ACU to fail (for example due to mechanical failure). So, in essence, the target utilization level cannot exceed ⅞ (87.5%), because beyond that point the surviving units could not carry the full heat load if one ACU failed, i.e., N+1 redundancy (N=7 active units plus one spare) would be lost.
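The utilization and COP figures above follow directly from the stated definitions. A short sketch of the arithmetic follows; the 100 kW nominal capacity and 5.5 kW fan power are assumed illustrative values, not figures from the text:

```python
# Illustrative ACU efficiency metrics. The 100 kW nominal capacity and
# 5.5 kW fan power below are assumed example values, not measured data.
def utilization(heat_removed_kw, nominal_capacity_kw):
    """UT = heat removed / nominal heat-load removal capacity."""
    return heat_removed_kw / nominal_capacity_kw

def cop(heat_removed_kw, fan_power_kw):
    """COP = heat removed / power consumption for the ACU fans."""
    return heat_removed_kw / fan_power_kw

def max_target_utilization(num_acus, num_spares=1):
    """Highest fleet-wide utilization that preserves N+1 redundancy:
    the full heat load must still fit on the surviving units."""
    return (num_acus - num_spares) / num_acus

ut = utilization(10.0, 100.0)     # a poorly utilized ACU: 10%
c = cop(10.0, 5.5)                # roughly the COP of about 1.8 noted above
cap = max_target_utilization(8)   # eight ACUs, one allowed to fail
print(ut, round(c, 1), cap)       # 0.1 1.8 0.875
```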
One of the inhibitors to optimizing ACU usage is a lack of visibility, i.e., an inability to discern to which physical areas or zones in a data center different ACUs are supplying cooled air. Thermal zones are physical areas (two-dimensional (2D)) or volumes (three-dimensional (3D)) within the data center. Each ACU supplies air to a specific thermal zone within the data center, which may also be referred to herein as a “supply zone” and typically applies to the plenum. Each ACU also gets return air from a specific thermal zone in the data center, which may also be referred to herein as a “return zone” and typically applies to the raised floor.
It is not unusual for a data center to have more than 50 ACUs somewhat randomly distributed across the data center space. The thermal zone resulting from each ACU (supply and/or return zone) is not only governed by the air flow produced by each ACU but also by the placement of vents or perforated tiles throughout the data center (because the vents or perforated tiles determine, at least in part, where in the data center the air from the ACUs is directed, see below). It is not unusual for data centers to have more than 1,000 vents or perforated tiles. Because these thermal zones are based on the actual air flow contribution of each ACU, a corresponding efficiency or coefficient of performance (COP) can be assigned to each thermal zone of each respective ACU. The air flow distribution throughout a data center is governed by many aspects. The present techniques use the simplest form of air flow distribution in a data center by applying the zone concept to the plenum. ACUs discharge air into the plenum using fans and, as a result, the plenum is pressurized. The placement of the vents or perforated tiles governs where the air escapes the plenum on the raised floor. That vent/perforated tile placement determines the zones (i.e., which area is supplied by which ACU).
Disclosed herein are techniques for modeling, i.e., creating and visualizing, these thermal zones. As will be described in further detail below, a velocity field is used to define, i.e., create, the thermal zones (that is, the zones are not defined beforehand, and thus must be created). Creating/defining the zones is an important aspect of the present techniques as it provides one way to determine how efficiently each ACU is being used (via the COP measure).
In step 204, at least one domain for thermal zone modeling is defined in the space. Each domain can be a two-dimensional or three-dimensional domain. As will be described below, in one exemplary embodiment wherein a data center is being modeled, the domain is defined by the dimensions of the sub-floor plenum. As will also be described below, more than one domain can be defined for a given space. Depending on the particular application, such as the physical layout of the space to which the present techniques are applied, the domain(s) may comprise the entire space, or a portion(s) thereof.
In step 206, since finite elements are being employed to model the space, a finite element mesh is created in each of the domains by sub-dividing each domain into a set of discrete sub-domains that interconnect a plurality of nodes. As will be described in detail below, the sub-domains (also referred to herein as “elements”) can be triangles (in the case of two-dimensional domains) or tetrahedra (in the case of three-dimensional domains). The use of triangles and tetrahedra are standard choices in the finite element method. The nodes correspond to x and y coordinates (in the case of two-dimensional domains) or to x, y and z coordinates (in the case of three-dimensional domains).
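As a concrete sketch of this sub-division step, the following snippet builds a triangular mesh over a rectangular two-dimensional domain, returning node coordinates and the triangles that interconnect them. This is an illustrative construction, not the meshing routine of the described system:

```python
def rectangle_mesh(width, height, nx, ny):
    """Sub-divide a rectangular 2D domain into triangular sub-domains.
    Returns (nodes, elements): nodes is a list of (x, y) coordinates
    and elements a list of 3-tuples of node indices, one per triangle."""
    nodes = [(i * width / nx, j * height / ny)
             for j in range(ny + 1) for i in range(nx + 1)]
    elements = []
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i              # lower-left node of the cell
            n1, n2, n3 = n0 + 1, n0 + nx + 1, n0 + nx + 2
            elements.append((n0, n1, n3))      # each rectangular cell is
            elements.append((n0, n3, n2))      # split into two triangles
    return nodes, elements

nodes, elems = rectangle_mesh(4.0, 2.0, 4, 2)
print(len(nodes), len(elems))  # 15 16
```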
In step 208, air flow sources (where the airflow enters the domain) and air flow sinks (where the air flow exits the domain) are identified in the domain. By way of reference to a data center, when the domain includes the sub-floor plenum the perforated tiles can be considered air flow sinks (because it is at the perforated tiles where the cooled air supplied by the ACUs exits the sub-floor plenum and enters the raised floor) and the ACUs can be considered air flow sources (because the airflow originates/enters the sub-floor plenum from the ACUs). On the other hand, when the domain includes the raised floor the perforated tiles can be considered air flow sources (because it is at the perforated tiles where the cooled air from the plenum enters the raised floor) and the ACUs can be considered sinks (because the warm air is cycled back into the ACUs and thus exits the raised-floor at the ACUs).
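The role reversal described above can be captured in a few lines. The sign convention (+1 for a source, −1 for a sink) and the component/domain names here are illustrative choices, not part of the described system:

```python
# Which components act as air flow sources (+1) or sinks (-1) depends
# on the domain being modeled, per the description above.
ROLES = {
    ("plenum", "acu"): +1,          # ACUs blow cooled air into the plenum
    ("plenum", "tile"): -1,         # air exits the plenum at the tiles
    ("raised_floor", "tile"): +1,   # tile air enters the raised floor
    ("raised_floor", "acu"): -1,    # warm return air exits at the ACUs
}

def flow_role(component, domain):
    """+1 if `component` is a source in `domain`, -1 if it is a sink."""
    return ROLES[(domain, component)]

print(flow_role("acu", "plenum"), flow_role("acu", "raised_floor"))  # 1 -1
```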
In step 210, air flow measurements are obtained from one or more of the air flow sources and sinks. According to an exemplary embodiment, these air flow measurements are obtained using mobile measurement technology (MMT). MMT is described for example in U.S. Pat. No. 7,366,632, issued to Hamann et al., entitled “Method and Apparatus for Three-Dimensional Measurements” (hereinafter “U.S. Pat. No. 7,366,632”), the contents of which are incorporated by reference herein. MMT V1.0 is a technology for optimizing data center infrastructures for improved energy and space efficiency which involves a combination of advanced metrology techniques for rapid measuring/surveying data centers (see, for example, U.S. Pat. No. 7,366,632) and metrics-based assessments and data-based best practices implementation for optimizing a data center within a given thermal envelope for optimum space and most-efficient energy utilization (see, for example, U.S. application Ser. No. 11/750,325, filed by Claassen et al., entitled “Techniques for Analyzing Data Center Energy Utilization Practices,” the contents of which are incorporated by reference herein).
In step 212, the air flow measurements obtained from the air flow sources and sinks are used to determine an air flow velocity vector at a center of each sub-domain (element). An exemplary process for determining these air flow velocity vectors using potential flow theory is described in detail below.
In step 214, each velocity vector is traced to one of the air flow sources. By way of reference to a data center model wherein the domain comprises the sub-floor plenum, each velocity vector can be traced to a particular ACU. An exemplary process for tracing the velocity vectors is described in detail below. A combination of the traces to a given one of the air flow sources (e.g., ACU) represents a thermal zone in the space. Exemplary thermal zones are shown, for example, in
As shown in
Each time the above-described steps are repeated, updated air flow measurements from the air flow sources and sinks can be acquired (if available) therefore making the instant techniques sensitive to changing conditions within the space. By way of example only, steps 210-214 can be repeated periodically (for example at a pre-determined time interval) to acquire updated air flow measurement data. The pre-determined time interval can be set, for example, based on a frequency at which updated measurement data is available and/or a frequency with which changes occur in the space.
In step 216, a cooling capacity is determined for each air flow source. For example, with reference to data center modeling, the cooling capacity of each vent or perforated tile can be determined. An exemplary process for determining cooling capacity is described in detail below.
As highlighted above, in one exemplary embodiment, potential flow theory is employed assuming constant (temperature independent) air density, free slipping over boundaries and that viscous forces can be neglected. For a general description of potential flow theory see, for example, L. D. Landau et al., “Fluid Mechanics,” Pergamon Press (1959), the contents of which are incorporated by reference herein.
As the air velocity ν=(νx, νy, νz) is assumed to be irrotational, that is, curl ν=0, the velocity can be taken to be the gradient of a scalar function φ. This function φ is called the “velocity (or air flow) potential” and it satisfies the Poisson equation. In other words, the (air) velocity field corresponds to a solution of:

∂²φ/∂x²+∂²φ/∂y²+∂²φ/∂z²=f,  (1)

νx=∂φ/∂x, νy=∂φ/∂y, νz=∂φ/∂z,  (2)

with appropriate boundary conditions. Here, f represents flow sources or sinks and νx, νy and νz are the velocity components in the x, y and z directions, respectively. It is notable that other more comprehensive partial differential equations (PDEs) can be used in accordance with the present techniques, which may include turbulence models and dissipation.
In order to provide boundary conditions for the above problem one could, for example, model vents or perforated tiles (or the output of ACUs) as “sources” (prescribing the measured flow as the normal derivative ∂φ/∂n on the boundary) and the returns to the ACUs as “sinks” (φ=0), while the racks are sinks (at their air inlets) and sources (at their air outlets) at the same time. As described above, the sources and sinks can vary depending on the domain (e.g., when the domain includes the sub-floor plenum the perforated tiles and ACUs can be considered sinks and sources, respectively; on the other hand when the domain includes the raised floor the perforated tiles and ACUs can be considered sources and sinks, respectively). Another alternative is to model the sources and sinks via a non-zero right-hand side f in Equation 1. If the sources or sinks are located on boundaries of the solution domain of the PDE, the former approach is appropriate (as it corresponds to the specification of Neumann boundary conditions). For sources or sinks that are located inside the solution domain (that is, not on boundaries), modeling via a non-zero right-hand side f in Equation 1 would be more suitable.
A finite element solver is implemented herein to calculate the air flow potential. For a standard reference on the finite element method, see T. J. R. Hughes, “The Finite Element Method: Linear Static and Dynamic Finite Element Analysis,” Chapter 1, Dover Publications (2000) (originally published by Prentice-Hall, 1987), the contents of which are incorporated by reference herein. One exemplary implementation of the present techniques was done in the C programming language, following some analogous finite element implementations in Matlab by J. Alberty et al., “Remarks Around 50 Lines of Matlab: Short Finite Element Implementation,” Numerical Algorithms 20, pp. 117-137 (1999) (hereinafter “Alberty”), the contents of which are incorporated by reference herein.
The solver requires specification of a mesh, which consists of a set of nodes within a specified domain, as well as triangles (for two-dimensional domains) or tetrahedra (for three-dimensional domains) connecting these nodes. As highlighted above, the triangles (or tetrahedra) are also referred to herein as elements of the mesh. The nodes correspond to (x,y) coordinates in a two-dimensional domain (or (x,y,z) coordinates in a three-dimensional domain). An example of a finite element mesh is illustrated in
An approximate solution,

φ̂=Σi=1N φ̂iΨi,  (3)

is sought to Equation 1 as a linear combination of N basis functions Ψi, i=1, . . . , N. Letting xj denote the j-th node in the mesh, the basis functions are chosen so that Ψi(xj)=1 if i=j and Ψi(xj)=0 otherwise. This is typical of finite element approximations and has the advantage that the coefficient φ̂i in the linear combination of Equation 3 also corresponds to the approximation of the solution at the i-th node. As in Alberty, piecewise linear basis functions can be used to approximate the solution of Equation 1. Again, this is a standard choice in finite element approximations. Upon application of a Galerkin finite element discretization to Equation 1 with suitable boundary conditions one obtains a system of linear equations,

Aφ̂=b.  (4)

The process for applying a Galerkin finite element discretization to Equation 1 to result in the system of Equations 4 would be apparent to one of skill in the art, and thus is not described further herein. The solution φ̂=(φ̂1, . . . , φ̂N) of Equations 4 gives an approximate solution to the air flow potential φ at the nodes of the finite element mesh. Since the interest is in obtaining the gradient of the potential, as it defines the velocity field in Equations 2, once the linear system of Equations 4 is solved, a numerical approximation to the gradient of φ can be obtained from the linear combination of Equation 3 and the solution φ̂ of the system of Equations 4. This provides the air flow field, as defined by Equations 2. However, due to the choice of basis functions, it is not meaningful to obtain an approximation to the gradient at the mesh nodes. Instead, an approximation is valid at points inside each element. The center of each element can be chosen as a convenient and standard coordinate point at which to approximate the velocity field. Thermal zones can then be defined from trajectories of the air flow, as described below.
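To make the discretization concrete, the following is a minimal two-dimensional sketch in the spirit of Alberty's short implementation (here in Python with NumPy rather than the C implementation described above; the mesh, boundary values and test field are illustrative assumptions):

```python
import numpy as np

def unit_square_mesh(n):
    """Triangulate the unit square: nodes (N,2), elements (M,3)."""
    g = np.linspace(0.0, 1.0, n + 1)
    xs, ys = np.meshgrid(g, g)
    nodes = np.column_stack([xs.ravel(), ys.ravel()])
    elems = []
    for j in range(n):
        for i in range(n):
            n0 = j * (n + 1) + i
            n1, n2, n3 = n0 + 1, n0 + n + 1, n0 + n + 2
            elems += [(n0, n1, n3), (n0, n3, n2)]  # two triangles per cell
    return nodes, np.array(elems)

def p1_gradients(p):
    """Gradients of the three linear basis functions on the triangle with
    vertices p (3,2), plus its area (Alberty's construction)."""
    MT = np.vstack([np.ones((1, 3)), p.T])      # rows: [1 1 1], x's, y's
    G = np.linalg.solve(MT, np.vstack([np.zeros((1, 2)), np.eye(2)]))
    return G, 0.5 * abs(np.linalg.det(MT))

def solve_laplace(nodes, elems, dirichlet):
    """Galerkin P1 solution of Equation 1 with f = 0 and prescribed
    potential values (Dirichlet data) at the nodes in `dirichlet`."""
    N = len(nodes)
    A, b = np.zeros((N, N)), np.zeros(N)        # the system of Equation 4
    for tri in elems:
        G, area = p1_gradients(nodes[tri])
        A[np.ix_(tri, tri)] += area * G @ G.T   # element stiffness
    phi = np.zeros(N)
    fixed = np.array(sorted(dirichlet))
    phi[fixed] = [dirichlet[i] for i in fixed]
    free = np.setdiff1d(np.arange(N), fixed)
    rhs = b[free] - A[np.ix_(free, fixed)] @ phi[fixed]
    phi[free] = np.linalg.solve(A[np.ix_(free, free)], rhs)
    return phi

def element_velocities(nodes, elems, phi):
    """v = grad(phi) at element centers; the gradient of a piecewise
    linear solution is constant on each triangle (Equations 2)."""
    return np.array([p1_gradients(nodes[t])[0].T @ phi[t] for t in elems])

# Illustrative check: phi = x is harmonic, so with phi = x prescribed on
# the boundary the solver must reproduce v = grad(phi) = (1, 0) everywhere.
nodes, elems = unit_square_mesh(4)
on_bnd = lambda p: p[0] in (0.0, 1.0) or p[1] in (0.0, 1.0)
bc = {i: p[0] for i, p in enumerate(nodes) if on_bnd(p)}
vel = element_velocities(nodes, elems, solve_laplace(nodes, elems, bc))
print(np.allclose(vel, [1.0, 0.0]))  # True
```

Because the gradient of a piecewise linear solution is constant on each triangle, the velocity is reported per element rather than at the mesh nodes, matching the discussion above.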
A two-dimensional implementation of the above techniques to calculate the air flow potential for the sub-floor plenum area of a large (e.g., greater than 50,000 square feet) data center will now be described. In the following, the focus is on “plenum” thermal zones but the same principles can be applied to “above plenum” or the total raised floor data center.
As described above, with a domain defined by the sub-floor plenum the ACUs can be the air flow sources while the perforated tiles can be the air flow sinks applied as Neumann boundary conditions. In this particular embodiment, the right hand side of Equation 1 is set to zero. As highlighted above, air flow measurement data for the ACUs and the perforated tiles can be obtained using MMT. Suitable techniques to obtain the required input data for both the ACUs and perforated tiles are also described, for example, in H. F. Hamann, et al., “Methods and Techniques for Measuring and Improving Data Center Best Practices” IEEE Proceedings of the ITherm 2008 Conference, Orlando, Fla., pp. 1146-1152 (May 2008), the contents of which are incorporated by reference herein.
An enlarged view of a portion 500 of graphical representation 400 shown in
Once the air flow velocity field has been calculated (according to Equation 2, above), the air flow from/to each area of the data center is traced back to the originating/returning ACU (the air flow velocity vectors collectively indicate air flow patterns, which is why the velocity field is used to define the thermal zones (which are given by these air flow patterns)). An exemplary methodology for tracing the air flow in a data center is described, for example, in conjunction with the description of
These traces (also referred to herein as “air flow trajectories”), which are paths that individual particles follow, as well as the corresponding thermal zones, are shown in
Δp=R×f²
At this point it is worth noting that, as long as such changes do not involve modifications to the finite element mesh, repeating the calculations will require less computing time since the nodes and elements do not have to be regenerated each time. Furthermore, if a direct solver for linear systems is used to solve Equations 4, savings in computation time are also possible as long as the matrix A in Equations 4 remains unchanged (which is the case if the conditions changing correspond only to modifications in the measured flow at the sources and sinks, as this results only in a modified right-hand side b in Equations 4). As a direct solver typically employs two phases, numerical factorization of the coefficient matrix followed by the solution of the system with the factored matrix, the numerical factorization of the coefficient matrix need only be done once, as long as the matrix remains unchanged. The numerical factorization is the most time consuming of the two phases, so doing the factorization only when strictly needed can result in considerable savings in computational time.
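The factor-once, solve-many pattern described above can be sketched as follows, using SciPy's dense LU routines as a stand-in for whatever direct solver is employed; the matrix here is an arbitrary symmetric positive definite placeholder for the finite element matrix A:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)     # fixed mesh => the matrix A is fixed

lu, piv = lu_factor(A)              # expensive phase: factor A once

solutions = []
for _ in range(3):                  # cheap phase: once per sensor update
    b = rng.standard_normal(50)     # new measured flows => only b changes
    phi = lu_solve((lu, piv), b)    # reuse the stored factorization
    solutions.append((phi, b))

print(all(np.allclose(A @ phi, b) for phi, b in solutions))  # True
```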
The present techniques may include the exploitation of the superposition principle, which may provide ways to solve the respective air flow patterns for varying conditions faster by avoiding redundant calculations. Specifically, Equation 1 being solved for the potential φ can be re-written as ∇·(∇φ)=f, where ∇· is the divergence operator. One can use the principle of superposition and sum two solutions φ1 and φ2 of Equation 1 (or, generally, any number of solutions) to obtain a third solution φ3=φ1+φ2 as long as φ1 and φ2 are solutions to Equation 1 for the same domain (geometry). That is, say φ1 solves ∇·(∇φ1)=f1 and φ2 solves ∇·(∇φ2)=f2, then φ3=φ1+φ2 solves ∇·(∇φ3)=f1+f2. For example, say φ1 is a solution obtained with only ACU1 on (at a given fan speed setting), while all the other ACUs were off, and φ2 is a solution obtained with only ACU2 on (at a given fan speed setting), while all the other ACUs were off. The velocity field for the scenario with only ACU1 on is ν1=∇φ1 and the velocity field for the scenario with only ACU2 on is ν2=∇φ2. Then ν3=ν1+ν2 corresponds to a velocity field for the scenario with ACU1 and ACU2 on (at the corresponding fan speeds for which the original scenarios were obtained), while all other ACUs are off.
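This additivity can be verified numerically on any linear discretization of Equation 1. In the sketch below a one-dimensional discrete Laplacian stands in for the full problem, and the two right-hand sides mimic two ACUs turned on individually:

```python
import numpy as np

# Superposition check: because Equation 1 is linear, solutions for
# individual sources add. A 1D discrete Laplacian stands in for the
# full finite element system; source positions are illustrative.
n = 20
A = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))      # discrete d^2/dx^2
f1 = np.zeros(n); f1[5] = 1.0            # "only ACU1 on" source term
f2 = np.zeros(n); f2[14] = 1.0           # "only ACU2 on" source term

phi1 = np.linalg.solve(A, f1)            # solution with ACU1 only
phi2 = np.linalg.solve(A, f2)            # solution with ACU2 only
phi3 = np.linalg.solve(A, f1 + f2)       # solution with both on

print(np.allclose(phi3, phi1 + phi2))    # True: phi3 = phi1 + phi2
```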
In one exemplary embodiment, graphical software (e.g., the MMT client) is used to provide a graphical representation of the space, e.g., a data center, define the domains, edit sinks and sources, feed sensor data (if available) to define sinks and sources, initialize the calculations, and postprocess and visualize the thermal zones. The MMT client is a software application which allows graphically displaying the data center layout and also visualizing the air flow trajectories, the zones and the air flow vectors.
See, for example,
As highlighted above, the air flow at one or more of the sources/sinks can change. Further, sources/sinks can be added to and/or removed from the domain.
The flow resistance R of a vent or perforated tile can be expressed as R=Kρ/(2A²), with ρ as the density of air, A as the area of the tile and K as a loss coefficient. The pressure difference Δp and the air flow are related as follows,

Δp=R·f²airflow.

The pressure differential is re-measured with real-time sensors and Δp=R·f²airflow is used to calculate the air flow through each tile. As discussed above, the air flow is applied as a boundary condition.
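A sketch of that real-time conversion follows; the loss coefficient, air density and tile area below are assumed illustrative values, not parameters from the text:

```python
import math

def tile_resistance(K, rho, area):
    """Flow resistance R = K*rho/(2*A^2) of a vent or perforated tile."""
    return K * rho / (2.0 * area ** 2)

def tile_airflow(delta_p, R):
    """Invert Delta-p = R*f^2 to recover the volumetric air flow f."""
    return math.sqrt(delta_p / R)

R = tile_resistance(K=2.5, rho=1.2, area=0.33)  # assumed tile parameters
f = tile_airflow(delta_p=12.0, R=R)             # measured Pa -> m^3/s
print(abs(R * f ** 2 - 12.0) < 1e-9)            # True: consistent round trip
```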
Once the model has been defined and set up (i.e., a graphical representation has been provided, all domains have been defined, etc.) the thermal zones can be modeled (as described above) either based on user input or triggered by measured change in the data center using a sensor network (which, for example, can detect air flow changes and/or the addition/removal of sources/sinks from the domain, see above).
Finally, it is notable that the present techniques can also be used to determine a respective cooling capacity from each vent or perforated tile. Namely, in conjunction with a temperature model (as described in Hamann) the vent or perforated tile discharge temperatures TD can be calculated (i.e., once a velocity field ν=(νx, νy, νz) is obtained, it is used in the energy equation ρcpν·grad(T)−div(k grad(T))=0 with the temperature prescribed at the boundaries (e.g., at the inlet and outlet of the servers) in order to solve for the temperature distribution). Thus, the temperature distribution can be calculated (for example in two dimensions) as a function of the x and y coordinates, T(x,y). By knowing where (xt,yt) the perforated tiles are one can get the tile discharge temperature TD=T(xt,yt).
In combination with the air flow velocity or total air flow and an allowable inlet temperature Tinlet for the server, the cooling power per tile can be determined for each vent/perforated tile by Pcool≈(Tinlet−TD)·flow/3140 [cfm·K/kW]. In case the velocity vectors have been calculated (rather than measured and used as a boundary condition as in this embodiment), the flow can be obtained by integrating the perpendicular velocity vector component (most often νz) over the area of the tile or vent, flow=∫ν⊥dA. An example of this feature is shown in
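The per-tile cooling power formula above can be applied directly; the discharge temperature, inlet temperature and flow in this sketch are assumed example numbers:

```python
def tile_cooling_power_kw(t_inlet, t_discharge, flow_cfm):
    """Pcool ~ (Tinlet - TD) * flow / 3140 [cfm*K/kW]. Temperatures in
    degrees C (a difference in C equals a difference in K), flow in cfm,
    result in kW."""
    return (t_inlet - t_discharge) * flow_cfm / 3140.0

# Assumed example: a tile discharging 16 C air at 550 cfm against a
# 27 C allowable server inlet temperature.
p = tile_cooling_power_kw(27.0, 16.0, 550.0)
print(round(p, 2))  # 1.93 kW of cooling delivered by this tile
```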
In step 1206, a new next location is determined, i.e., calculated. Using the above example of a data center tile as a location point, in this step the next tile is selected. In a two-dimensional domain, the next location xt and yt can be selected as follows:
xt(steps)=(xt(steps−1)+vx*stepsize*n);
and
yt(steps)=(yt(steps−1)+vy*stepsize*n),
wherein the magnitude of the velocity vector is v=sqrt(vx²+vy²) and n=0.2/v. The quantity n is a parameter which controls the step size in relationship to vx and vy and prevents making too small of a step if the velocity vectors are small. In other words, the variable n is introduced to make sure that, if velocities are very small, one still moves a sizeable step from one location to the next so that the methodology is not too slow. Stepsize can be chosen by the user.
In step 1208, a determination is made as to whether the new location falls within an area of an air flow source, e.g., an ACU. If the trajectory intersects with an ACU, then in step 1210, the previous location is assigned to that ACU. For example, if tile 1 is the initial (previous) location and tile 2 is the new location, and tile 2 falls within an area of ACU 1, then tile 1 would be assigned to ACU 1. The process is then repeated (for n number of locations, e.g., tiles, throughout the space, e.g., data center) by referencing the velocity vector for the new location, e.g., tile 2, (step 1204), determining another new location, e.g., of a tile 3, (step 1206) and so on. The ACU location is known, for example, from the graphical representation in the MMT client. That representation includes the x,y coordinates of the ACU as well as the width, length and height (i.e., defining the area) of the ACU. Once xt and yt (see step 1206) are within the ACU area, another new location is determined, and so on.
On the other hand, if the new location (e.g., tile 2 using the above example) does not fall within an area of an ACU, then a determination can be made in step 1212 as to whether or not too many steps have been made, i.e., there is a maximum number of steps in case the trajectory does not end up at an ACU. Another way to look at it is there may be a limit imposed on the number of times steps 1204 and 1206 can be repeated without having a location (e.g., tile) fall within the area of an air flow source. If in fact too many steps have been made (i.e., the limit has been reached), then in step 1214, the previous location, e.g., tile 1, is designated as a “no zone,” meaning that location is not associated with a thermal zone. The process is then repeated by referencing the velocity vector for the new location, e.g., tile 2, (step 1204), determining another new location, e.g., of a tile 3, (step 1206) and so on for n locations throughout the space. On the other hand, if the maximum number of steps has not been exceeded (i.e., the limit has not been reached) (and the new location, e.g., tile 2, does not fall within the area of an ACU (step 1208)) then the process beginning again at step 1204 is repeated, i.e., by referencing the velocity vector for the new location, e.g., tile 2, (step 1204), determining another new location, e.g., of a tile 3, (step 1206) and so on. Methodology 1200 is repeated for each of n locations, i.e., tiles, in the space, i.e., data center. In this manner, each velocity vector is traced to a particular air flow source (e.g., ACU) or designated as not being associated with a particular thermal zone. Exemplary code for tracing velocity vectors to a particular air flow source and thereby defining thermal zones in a space is provided in the computer program listing appendix.
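The tracing loop above can be sketched as follows. The velocity lookup, the ACU footprint and the toy converging field are illustrative stand-ins for the finite element velocity field and the MMT client layout data:

```python
import math

def trace_to_source(start, velocity_at, acu_areas, stepsize=1.0,
                    max_steps=500):
    """Follow the velocity field from `start` (x, y) until the trajectory
    enters an ACU footprint (x0, y0, width, length); return that ACU's
    name, or "no zone" if max_steps is exceeded without reaching one."""
    x, y = start
    for _ in range(max_steps):
        for name, (x0, y0, w, l) in acu_areas.items():
            if x0 <= x <= x0 + w and y0 <= y <= y0 + l:
                return name                # step 1210: assign to this ACU
        vx, vy = velocity_at(x, y)         # step 1204: look up velocity
        v = math.hypot(vx, vy)
        if v == 0.0:
            return "no zone"               # stagnant air: no source found
        n = 0.2 / v                        # keeps steps sizeable when v is small
        x += vx * stepsize * n             # step 1206: new next location
        y += vy * stepsize * n
    return "no zone"                       # step 1214: too many steps

# Toy field: air converges toward an ACU occupying [0,1] x [0,1].
field = lambda x, y: (-x, -y)
acus = {"ACU1": (0.0, 0.0, 1.0, 1.0)}
print(trace_to_source((5.0, 4.0), field, acus))  # ACU1
```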
Turning now to
Apparatus 1300 comprises a computer system 1310 and removable media 1350. Computer system 1310 comprises a processor device 1320, a network interface 1325, a memory 1330, a media interface 1335 and an optional display 1340 (for displaying, e.g., graphical interface 800 of
As is known in the art, the methods and apparatus discussed herein may be distributed as an article of manufacture that itself comprises a machine-readable medium containing one or more programs which when executed implement embodiments of the present invention. For instance, the machine-readable medium may contain a program configured to provide a graphical representation of the space; define at least one domain in the space for modeling; create a mesh in the domain by sub-dividing the domain into a set of discrete sub-domains that interconnect a plurality of nodes; identify air flow sources and sinks in the domain; obtain air flow measurements from one or more of the air flow sources and sinks; determine an air flow velocity vector at a center of each sub-domain using the air flow measurement obtained from the air flow sources and sinks; and trace each velocity vector to one of the air flow sources, wherein a combination of the traces to a given one of the air flow sources represents a thermal zone in the space.
The machine-readable medium may be a recordable medium (e.g., floppy disks, hard drive, optical disks such as removable media 1350, or memory cards) or may be a transmission medium (e.g., a network comprising fiber-optics, the world-wide web, cables, or a wireless channel using time-division multiple access, code-division multiple access, or other radio-frequency channel). Any medium known or developed that can store information suitable for use with a computer system may be used.
Processor device 1320 can be configured to implement the methods, steps, and functions disclosed herein. The memory 1330 could be distributed or local and the processor 1320 could be distributed or singular. The memory 1330 could be implemented as an electrical, magnetic or optical memory, or any combination of these or other types of storage devices. Moreover, the term “memory” should be construed broadly enough to encompass any information able to be read from, or written to, an address in the addressable space accessed by processor device 1320. With this definition, information on a network, accessible through network interface 1325, is still within memory 1330 because the processor device 1320 can retrieve the information from the network. It should be noted that each distributed processor that makes up processor device 1320 generally contains its own addressable memory space. It should also be noted that some or all of computer system 1310 can be incorporated into an application-specific or general-use integrated circuit.
Optional video display 1340 is any type of video display suitable for interacting with a human user of apparatus 1300. Generally, video display 1340 is a computer monitor or other similar video display.
The example code contained in the computer program listing appendix is written in the PV-WAVE programming language and traces velocity vectors to a particular air flow source, thereby defining thermal zones in a space.
Although illustrative embodiments of the present invention have been described herein, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be made by one skilled in the art without departing from the scope of the invention.
Number | Name | Date | Kind |
---|---|---|---|
4895442 | Boutier et al. | Jan 1990 | A |
7024342 | Waite et al. | Apr 2006 | B1 |
7366632 | Hamann et al. | Apr 2008 | B2 |
20020144473 | Satomi et al. | Oct 2002 | A1 |
20020148222 | Zaslavsky et al. | Oct 2002 | A1 |
20030019606 | Stauder et al. | Jan 2003 | A1 |
20030147770 | Brown et al. | Aug 2003 | A1 |
20070062685 | Patel et al. | Mar 2007 | A1 |
20070119603 | Haaland et al. | May 2007 | A1 |
20080015440 | Shandas et al. | Jan 2008 | A1 |
20080040067 | Bashor et al. | Feb 2008 | A1 |
20080115950 | Haaland et al. | May 2008 | A1 |
20080155441 | Long et al. | Jun 2008 | A1 |
20080158815 | Campbell et al. | Jul 2008 | A1 |
20080174954 | VanGilder et al. | Jul 2008 | A1 |
20080282948 | Quenders et al. | Nov 2008 | A1 |
20080288193 | Claassen et al. | Nov 2008 | A1 |
20090138313 | Morgan et al. | May 2009 | A1 |
20090326879 | Hamann et al. | Dec 2009 | A1 |
20090326884 | Amemiya et al. | Dec 2009 | A1 |
20100076607 | Ahmed et al. | Mar 2010 | A1 |
20100082309 | Dawson et al. | Apr 2010 | A1 |
20100305911 | Samiyilbas et al. | Dec 2010 | A1 |
20100312415 | Loucks | Dec 2010 | A1 |
20100328890 | Campbell et al. | Dec 2010 | A1 |
20100328891 | Campbell et al. | Dec 2010 | A1 |
20110040532 | Hamann et al. | Feb 2011 | A1 |
Number | Date | Country | |
---|---|---|---|
20110040529 A1 | Feb 2011 | US |