Solar access refers to the characterization of solar radiation exposure at a designated location. Solar access accounts for daily and seasonal variations in solar radiation exposure that may result from changes in atmospheric clearness, shading by obstructions, or variations in incidence angles of the solar radiation due to the relative motion between the Sun and the Earth. Prior art measurement systems for characterizing solar access typically enable solar access to be determined only at the locations where the measurement systems are positioned. As a result, these measurement systems are not well suited for determining solar access at locations that are inaccessible or remote from the measurement systems, which may be a disadvantage in a variety of contexts.
For example, determining solar access at an installation site of a solar energy system prior to installing the solar energy system may provide the advantage of enabling solar panels within the solar energy system to be positioned and/or oriented to maximize the capture of solar radiation. However, due to space constraints, or in order to minimize shading by obstructions that reduce solar radiation exposure, installation sites of solar energy systems are typically relegated to rooftops or other remote locations. Determining solar access at these installation sites relies on positioning a prior art solar access measurement system on the rooftop or other remote location, which may be a time-consuming, inconvenient, or unsafe task.
In another example, determining solar access of a proposed installation site of a solar energy system in the design phase of a building may provide the advantage of enabling the proposed installation site to be moved at low cost, prior to construction of the building, in the event that the solar radiation exposure at the originally proposed installation site were determined to be inadequate. However, the prior art solar access measurement systems are unsuitable for determining solar access in the building's design phase when the installation sites are not yet accessible to accommodate positioning of the solar access measurement system.
Determining solar access at various positions on a proposed building may also be advantageous to establish the placement of windows, air vents and other building elements. However, prior art solar access measurement systems are also of little use in this context, where the locations of these building elements are typically not accessible for placement of the measurement systems.
A technique disclosed in a website having the URL “http://www.solarpathfinder.com/formulas.html?id=LIDxqaCI” estimates shading by obstructions at a location that is different from where a solar access measurement system is positioned. However, this technique applies only to a location that has a vertical offset from where the solar access measurement system is positioned. In addition, this technique relies on measuring the distance to the obstructions that cause the shading, which may be time-consuming or impractical, depending on the physical attributes of the terrain that contains the obstructions.
In view of the above, there is a need for improved capability to determine solar access at one or more locations that are remote from where a solar access measurement system is positioned.
The embodiments of the present invention may be better understood from the following detailed description when read with reference to the accompanying Figures. The features in the Figures are not necessarily to scale. Emphasis is instead placed upon illustrating the principles and elements of the embodiments of the present invention. Wherever practical, like reference designators in the Figures refer to like features.
In this example, the position P1 has coordinates (0, 0, z1) and the position P2 has coordinates (0, 0, z2) along corresponding axes in a Cartesian x, y, z coordinate system, indicating that there is an offset in the vertical, or “z” direction between the position P1 and the position P2. The z axis, in this example, has a direction that is anti-parallel to the Earth's gravity vector G. The position P3 has coordinates (x3, y3, z3), indicating that in this example physical context CT, the position P3 has an offset from the position P1 and the position P2 in each of the “x” direction, the “y” direction and the “z” direction. In alternative examples, the position P3 is offset or remote from the positions P1, P2 in only one or two of the “x”, “y”, and “z” directions.
The geometric construction line CL1 between the position P1 and the interface INT has an azimuth angle Φ1, based on projection of the construction line CL1 into the plane z=z1 (not shown). Due to the vertical or “z” direction offset between the position P1 and the position P2, the geometric construction line CL2 between the position P2 and the interface INT also has the azimuth angle Φ1 based on projection of the construction line CL2 into the plane z=z1. The construction line CL1 has an elevation angle θ1 relative to a plane z=z1, whereas the construction line CL2 has an elevation angle θ2 relative to a plane z=z2 (not shown). The geometric construction line CL3 between the position P3 and the interface INT has an azimuth angle Φ3 based on projection of the construction line CL3 into the plane z=z3 (not shown). Typically, the azimuth angle Φ3 is different from the azimuth angle Φ1. The construction line CL3 has a third elevation angle θ3 relative to a plane z=z3. In the example physical context CT, the position P1 is a distance L from the interface INT, in the direction of the azimuth angle Φ1, as indicated by projection of the construction line CL1 into the plane z=z1. The interface INT has a height H from the plane z=0.
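The geometric relationships above can be illustrated with a short sketch (all coordinates hypothetical; the azimuth zero reference is assumed to lie along the x axis). It shows that construction lines from two positions offset only in the "z" direction, such as CL1 from P1 and CL2 from P2, share the same azimuth angle to a given point on the interface INT but have different elevation angles:

```python
import math

def line_angles(p_from, p_to):
    """Azimuth and elevation of the geometric construction line from
    p_from to p_to: azimuth from the projection into the plane z = const,
    elevation measured from the horizontal plane through p_from."""
    dx = p_to[0] - p_from[0]
    dy = p_to[1] - p_from[1]
    dz = p_to[2] - p_from[2]
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    return azimuth, elevation

# Hypothetical point on the interface INT at height H = 8, distance L = 10.
INT = (10.0, 0.0, 8.0)
az1, el1 = line_angles((0.0, 0.0, 0.0), INT)   # CL1 from P1 = (0, 0, z1)
az2, el2 = line_angles((0.0, 0.0, 2.0), INT)   # CL2 from P2 = (0, 0, z2)
# az1 equals az2, while el1 differs from el2.
```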
According to one embodiment of the extrapolation system 10, step 2 includes positioning the measurement system 14 at the position P1, where the skyline imaging system 22 then acquires the orientation-referenced image I1. Step 4 includes positioning the measurement system 14 at the position P2, where the skyline imaging system 22 then acquires the orientation-referenced image I2. The images I1, I2 acquired by the skyline imaging system 22 according to steps 2 and 4 of the extrapolation system 10 are typically provided to a processor 28 that is enabled to provide the output parameter 11 according to step 6 of the extrapolation system 10 (shown in
The SOLMETRIC SUNEYE, a commercially available product from SOLMETRIC Corporation of Bolinas, Calif., USA, provides one example of a hardware and software context suitable for implementing various aspects of the measurement system 14. The SOLMETRIC SUNEYE includes a skyline imaging system 22 that is enabled to provide the orientation-referenced images I1, I2 of the relevant skylines at the positions P1, P2, respectively. Points within the orientation-referenced image I1 provided by the SOLMETRIC SUNEYE have mappings to a first set of azimuth angles and elevation angles. Points within the orientation-referenced image I2 provided by the SOLMETRIC SUNEYE have mappings to a second set of azimuth angles and elevation angles. Each set of azimuth angles and elevation angles is typically established through calibration of the field of view of the skyline imaging system 22.
The calibration typically includes placing the SOLMETRIC SUNEYE at a designated physical location, with a designated reference heading and a level orientation. The calibration then includes capturing a calibration image that includes one or more physical reference positions that are each at a predetermined azimuth angle and elevation angle in the field of view of the skyline imaging system 22. From the predetermined azimuth angles and elevation angles of the one or more physical reference positions in the calibration image, other points in the field of view of the skyline imaging system 22 may be mapped to corresponding azimuth angles and elevation angles using look-up tables, curve fitting or other suitable techniques. The calibration used in the SOLMETRIC SUNEYE typically accounts for image distortion, aberrations, or other anomalies in the field of view of the skyline imaging system 22 of the SOLMETRIC SUNEYE.
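A calibration mapping of this kind can be sketched as follows (a minimal illustration, not the SOLMETRIC SUNEYE's actual calibration; the table values, function names, and the linear interpolation are assumptions). For a fisheye-style image, a pixel's radial distance from the image center maps to an elevation angle, and its angular position maps to an azimuth angle relative to the reference heading:

```python
import math
from bisect import bisect_right

# Hypothetical calibration table: (normalized pixel radius from the image
# center, elevation angle in degrees) pairs taken at reference positions.
CAL_TABLE = [(0.0, 90.0), (0.5, 45.0), (1.0, 0.0)]

def elevation_from_radius(r, table=CAL_TABLE):
    """Map a normalized pixel radius to an elevation angle by linear
    interpolation between calibration reference points (a stand-in for
    the look-up tables or curve fits mentioned in the text)."""
    radii = [p[0] for p in table]
    i = bisect_right(radii, r)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (r0, e0), (r1, e1) = table[i - 1], table[i]
    return e0 + (e1 - e0) * (r - r0) / (r1 - r0)

def azimuth_from_pixel(dx, dy, heading_offset_deg=0.0):
    """Map a pixel offset (dx, dy) from the image center to an azimuth
    angle in degrees, relative to the calibrated reference heading."""
    return (math.degrees(math.atan2(dx, dy)) + heading_offset_deg) % 360.0
```

A finer calibration table, or a fitted radial distortion curve, would play the same role for a lens whose projection deviates from this simple model.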
In one example implementation of steps 2 and 4 of the extrapolation system 10, the SOLMETRIC SUNEYE acquires the images I1, I2 with an image sensor 24 that includes a digital camera and a fisheye lens or other wide field of view lens that are integrated into a skyline imaging system 22 of the SOLMETRIC SUNEYE. The digital camera and the fisheye lens have a hemispherical field of view suitable for providing digital images that represent the relevant skyline at each of the positions P1, P2. The images I1, I2 that are provided by the SOLMETRIC SUNEYE each have a level orientation, and a heading orientation (typically south-facing in the Earth's northern hemisphere, and north-facing in the Earth's southern hemisphere) when each of the images I1, I2 is acquired. As a result of the calibration of the field of view of the skyline imaging system 22 of the SOLMETRIC SUNEYE, each point in the resulting image I1 has a corresponding pair of referenced azimuth angles and elevation angles associated with it. Similarly, each point in the resulting image I2 also has a corresponding pair of referenced azimuth angles and elevation angles associated with it. Each point in the field of view of the skyline imaging system 22 may be represented by a portion of a pixel, or by a group of one or more pixels in the digital images that represent the relevant skylines captured in the images I1, I2.
Other examples of commercially available products that may be used to implement various aspects of the measurement system 14 acquire the image I1 by first projecting a first corresponding image of the relevant skyline on a reflective or partially reflective contoured surface at the position P1. These commercially available products then capture a first digital image of the first corresponding image that is projected on the contoured surface. The image I2 is acquired by first projecting a second corresponding image of the relevant skyline on a reflective or partially reflective contoured surface at the position P2. These commercially available products then capture a second digital image of the second corresponding image that is projected on the contoured surface. Each of the resulting first and second digital images typically represents a hemispherical or other-shaped field of view suitable for establishing the images I1, I2 of the relevant skylines at each of the positions P1, P2, respectively. The images I1, I2 provided by these types of measurement systems 14 each have a level orientation or level reference, and/or a heading orientation or a heading reference (typically south-facing in the Earth's northern hemisphere, and north-facing in the Earth's southern hemisphere) when each of the first and second digital images is captured. Accordingly, as a result of calibration of this type of measurement system 14, each point in the resulting image I1 has a corresponding pair of referenced azimuth and elevation angles associated with it, and each point in the resulting image I2 has a corresponding pair of referenced azimuth and elevation angles associated with it. 
Typical calibration schemes for these types of measurement systems 14 include establishing one or more scale factors and/or rotational corrections for the captured digital images, typically based on the relative positions of physical features present in the captured digital images, and then applying the scale factors and/or rotational corrections so that points in each of the first and second digital images may be mapped to corresponding azimuth angles and elevation angles.
The image sensor 24 in the skyline imaging system 22 of the measurement system 14 may also acquire each of the images I1, I2 based on one or more sectors or other subsets of a hemispherical, or other-shaped field of view at each of the positions P1, P2, respectively. To achieve a resulting field of view of the relevant skyline in each of the images I1, I2 that is sufficiently wide to establish the output parameter 11, the one or more sectors or other subsets acquired by the image sensor 24 in the skyline imaging system 22 may be digitally "stitched" together using known techniques. One example of a skyline imaging system 22 that is suitable for establishing each of the images I1, I2 based on multiple sectors is provided by M. K. Dennis, An Automated Solar Shading Calculator, Proceedings of Australian and New Zealand Solar Energy Society, 2002.
Points within the image I1 have a mapping to corresponding azimuth angles and elevation angles, so that each point on the detected skyline 13a in the image I1 has an associated azimuth angle and elevation angle. For the purpose of illustration, the azimuth angle to an example point on the interface INT within the image I1 is indicated by the reference element Φ1 and the elevation angle to the example point on the interface INT within the image I1 is indicated by the reference element θ1. Azimuth angles are indicated relative to an axis defining the heading reference REF. The elevation angle θ1 to the point on the interface INT at the azimuth angle Φ1 is represented by the radial distance from a circumference C1 to the interface INT toward the origin OP1 of the image I1. In this example, the origin OP1 in the image I1 corresponds to the position P1 in the physical context CT of
Points within the image I2 have a mapping to corresponding azimuth angles and elevation angles, so that each point on the detected skyline 13b in the image I2 has an associated azimuth angle and elevation angle. For the purpose of illustration, the azimuth angle to the example point on the interface INT within the image I2 is indicated by the reference element Φ1 and the elevation angle to the example point on the interface INT within the image I2 is indicated by the reference element θ2. Azimuth angles are also indicated relative to the axis defining the heading reference REF. The elevation angle θ2 to the point on the interface INT at the azimuth angle Φ1 is represented by the radial distance from a circumference C2 to the interface INT toward an origin OP2 in the image I2. In this example, the origin OP2 corresponds to the position P2 in the physical context CT of
The SOLMETRIC SUNEYE is enabled to automatically provide a detected skyline 13a, 13b for each of the images I1, I2, respectively, that are acquired by the SOLMETRIC SUNEYE. HOME POWER magazine, ISSN 1050-2416, October/November 2007, Issue 121, pages 88-90, herein incorporated by reference, shows an example wherein processing of an acquired image by the SOLMETRIC SUNEYE provides enhanced contrast between open sky 13 and obstructions OBS in the relevant skyline, which is used to automatically define multiple points on the interface INT that form a detected skyline. The SOLMETRIC SUNEYE also provides for manual correction, enhancement, or modification to the automatically detected skyline by a user of the SOLMETRIC SUNEYE. The detected skylines provided by the SOLMETRIC SUNEYE are suitable for establishing the detected skylines 13a, 13b within each of the images I1, I2, respectively, that are acquired by the SOLMETRIC SUNEYE. Measurement systems 14, such as those disclosed by M. K. Dennis, An Automated Solar Shading Calculator, Proceedings of Australian and New Zealand Solar Energy Society, 2002, typically include processes or algorithms to distinguish between the open sky 13 and obstructions OBS, and are suitable for computing, detecting or otherwise establishing the detected skyline 13a, 13b from the images I1, I2, respectively, that are acquired by the skyline imaging system 22 within the measurement systems 14.
Measurement systems 14 that rely on projecting images of the relevant skyline onto a contoured surface may provide for manual designation of the detected skylines 13a, 13b within the corresponding images that are projected onto a contoured surface. These measurement systems 14 may alternatively provide for user-entered designations or other manipulations of subsequent digital images that are captured of the projected images, and are suitable for computing, detecting or otherwise establishing the detected skyline 13a, 13b from the images I1, I2, respectively, that are acquired by the skyline imaging system 22 within the measurement systems 14.
The field of view I3 in the examples of
Each point on the interface INT on the detected skyline 13c of
In the field of view I3, the origin OP3 has a mapping to an elevation angle of 90 degrees, and the circumference C3 has a mapping to an elevation angle of 0 degrees. In the example where the image sensor 24 has a field of view of one hundred eighty degrees, the circumference C3 corresponds to the plane z=z3 shown in the physical context CT of
H = L tan(θ1) + z1   (1)
L = (z2 − z1)/(tan(θ1) − tan(θ2))   (2)
In equations (1) and (2), the elevation angles θ1, θ2, at each azimuth angle Φ1, are extracted from the images I1, I2, based on the mapping of points in each of the images I1, I2 to corresponding azimuth angles and elevation angles. The coordinates z1 and z2 associated with the positions P1, P2, respectively, have been previously designated in the example physical context CT shown in
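Equations (1) and (2) can be applied in a few lines (a sketch under the coordinate conventions above; the numeric check values are hypothetical):

```python
import math

def obstruction_height_and_distance(theta1, theta2, z1, z2):
    """Solve equations (1) and (2): given elevation angles theta1, theta2
    (radians) to the same skyline point at a common azimuth, observed from
    heights z1 and z2, return the height H of the interface point above
    the plane z = 0 and its horizontal distance L."""
    L = (z2 - z1) / (math.tan(theta1) - math.tan(theta2))  # equation (2)
    H = L * math.tan(theta1) + z1                          # equation (1)
    return H, L

# Hypothetical check: an interface point at H = 8, L = 10,
# viewed from z1 = 0 and z2 = 2.
theta1 = math.atan2(8.0 - 0.0, 10.0)
theta2 = math.atan2(8.0 - 2.0, 10.0)
H, L = obstruction_height_and_distance(theta1, theta2, 0.0, 2.0)
```

Note that the solution degrades as tan(θ1) approaches tan(θ2), which is consistent with the observation below that errors decrease as the vertical offset z2 − z1 increases.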
INTr = (L² + H²)^1/2   (3)
INTθ = tan⁻¹(H/L)   (4)
INTΦ = Φ1   (5)
The Cartesian coordinates (INTx, INTy, INTz) of the point on the interface INT may be determined from the spherical coordinates (INTr, INTθ, INTΦ) of the point on the interface INT according to equations (6)-(8):
INTx = INTr cos(INTθ) cos(INTΦ)   (6)
INTy = INTr cos(INTθ) sin(INTΦ)   (7)
INTz = INTr sin(INTθ)   (8)
Φ3 = tan⁻¹((INTy − y3)/(INTx − x3))   (9)
θ3 = tan⁻¹((INTz − z3)/((INTx − x3)² + (INTy − y3)²)^1/2)   (10)
Equations (9) and (10) are suitable for providing, as an output parameter 11, a mapping from one or more points present within both of the acquired images I1, I2 at the same azimuth angle but at different elevation angles θ1, θ2, respectively, to corresponding one or more points with azimuth angles Φ3 and elevation angles θ3 referenced to the position P3. Determining the azimuth angles Φ3 and the elevation angles θ3 referenced to the position P3 enables the solar access 15 or other output parameters 11 to be referenced to the position P3, even though the SOLMETRIC SUNEYE or other measurement system 14 used to acquire the images I1, I2, is typically not positioned at the position P3, and typically does not acquire an image or other measurement with the measurement system 14 positioned at the position P3.
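The full chain from equations (3) through (10) can be sketched as follows (a minimal illustration under the stated coordinate conventions; atan2 is substituted for tan⁻¹ for quadrant safety, and the check values are hypothetical):

```python
import math

def extrapolate_to_p3(L, H, phi1, p3):
    """Follow equations (3)-(10): convert the interface point at horizontal
    distance L and height H along azimuth phi1 (referenced to the origin
    shared by P1 and P2) into Cartesian coordinates, then re-reference its
    azimuth and elevation to the remote position P3 = (x3, y3, z3)."""
    x3, y3, z3 = p3
    r = math.hypot(L, H)                      # equation (3)
    theta = math.atan2(H, L)                  # equation (4)
    phi = phi1                                # equation (5)
    ix = r * math.cos(theta) * math.cos(phi)  # equation (6)
    iy = r * math.cos(theta) * math.sin(phi)  # equation (7)
    iz = r * math.sin(theta)                  # equation (8)
    phi3 = math.atan2(iy - y3, ix - x3)       # equation (9)
    theta3 = math.atan2(iz - z3,
                        math.hypot(ix - x3, iy - y3))  # equation (10)
    return phi3, theta3

# Hypothetical check: with P3 level with the interface point (z3 = H),
# the extrapolated elevation angle theta3 is zero.
phi3, theta3 = extrapolate_to_p3(10.0, 8.0, 0.0, (0.0, 0.0, 8.0))
```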
The coordinates of the positions P1, P2, P3 used in equations (1)-(10) are typically user-entered or otherwise provided to the processor 28 as a result of GPS (global positioning system) measurements, dead reckoning, laser range-finding, electronic measurements, physical measurements, or any other suitable methods or techniques for determining or otherwise establishing measurements, coordinates, locations, or physical offsets between the positions P1, P2, P3.
Errors in the determination of the output parameters 11 by the measurement system 14 typically decrease as the vertical, or “z” direction offset, z2−z1, between the position P2 and the position P1 increases. Errors in the determination of the output parameters 11 by the measurement system 14 typically decrease as the offset between the position P3 and each of the positions P1, P2 decreases. Accordingly, the vertical offset between the position P2 and P1 is typically designated to be large enough, and the offset of the position P3 from the positions P1, P2 is designated to be small enough so that errors in the output parameters 11 that are attributable to the measurement system 14 are sufficiently small.
The output parameter 11 provided in step 6 of the extrapolation system 10 may also include a determination of solar access 15, a characterization of solar radiation exposure at a designated location and/or orientation. Solar access 15 typically accounts for time-dependent variations in solar radiation exposure that occur at the designated location on daily, seasonal, or other timescales due to the relative motion between the Sun and the Earth. These variations in solar radiation exposure are typically attributable to shading from buildings, trees or other obstructions OBS, variations in atmospheric clearness, or variations in incidence angles of solar radiation at the designated location and orientation where the solar access is determined. Solar access 15 may be expressed by available energy provided by the solar radiation exposure, by percentage of energy of solar radiation exposure, by irradiance in kilowatt-hours or other energy measures, by graphical representations of solar radiation exposure versus time, by measures of insolation such as kilowatt-hours per square meter, by an overlay of the paths of the Sun on the detected skyline 13c, or other relevant skyline, as shown in
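The shading test underlying such a Sun-path overlay can be sketched as follows (a minimal illustration, not the product's actual algorithm; the skyline sample values and function names are assumptions, with angles in degrees). At each point along the Sun's path, the Sun is obstructed whenever its elevation falls below the detected skyline's elevation at the Sun's azimuth:

```python
import math

def skyline_elevation(azimuth, skyline):
    """Linearly interpolate the detected skyline's elevation at the given
    azimuth; skyline is a list of (azimuth, elevation) pairs sorted by
    azimuth."""
    for (a0, e0), (a1, e1) in zip(skyline, skyline[1:]):
        if a0 <= azimuth <= a1:
            return e0 + (e1 - e0) * (azimuth - a0) / (a1 - a0)
    return 0.0  # azimuth outside the detected span: assume open sky

def sun_is_shaded(sun_azimuth, sun_elevation, skyline):
    """True when an obstruction on the detected skyline blocks the Sun."""
    return sun_elevation < skyline_elevation(sun_azimuth, skyline)

# Hypothetical detected skyline: a single obstruction peaking at 30 degrees.
skyline_13c = [(0.0, 10.0), (90.0, 30.0), (180.0, 10.0)]
```

Accumulating this test over the Sun's positions throughout a day or year, weighted by the corresponding irradiance, yields the kind of energy and insolation measures listed above.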
The solar access 15 or other output parameter 11 provided in step 6 of the extrapolation system 10 is typically stored in a memory and may be presented on a display or other output device (not shown) that is associated with the measurement system 14.
While the flow diagram of
In alternative embodiments of the extrapolation system 10, step 2 and step 4 each include acquiring more than one orientation-referenced image at one or more positions or orientations. For example, the images I1, I2 may each be the result of multiple image acquisitions at the first position P1 and the second position P2, respectively. In another example, the processing of step 6 includes processing three or more orientation-referenced images acquired at corresponding multiple positions to provide the output parameter 11 extrapolated to a position P3 that is remote from each of the three or more positions.
While the embodiments of the present invention have been illustrated in detail, it should be apparent that modifications and adaptations to these embodiments may occur to one skilled in the art without departing from the scope of the present invention as set forth in the following claims.