The present disclosure generally relates to a solar panel handling system, and more particularly, to a system and method for installation of solar panels on installation structures.
In the discussion that follows, reference is made to certain structures and/or methods. However, the following references should not be construed as an admission that these structures and/or methods constitute prior art. Applicant expressly reserves the right to demonstrate that such structures and/or methods do not qualify as prior art against the present invention.
Installation of a photovoltaic array typically involves affixing solar panels to an installation structure. This underlying support provides attachment points for the individual solar panels and assists with routing of electrical systems and, when applicable, any mechanical components. Because of the fragile nature and large dimensions of solar panels, the process of affixing them to an installation structure poses unique challenges. For example, in many instances the solar panels of a photovoltaic array are installed on a rotatable structure which can rotate the solar panels about an axis to enable the array to track the sun. In such instances, it is difficult to ensure that all of the solar panels in an array are coplanar and leveled relative to the axis of the rotatable structure. Additionally, the installation costs for a photovoltaic array can be a considerable portion of its total build cost. Thus, there is a need for a more efficient and reliable solar panel handling system for installing solar panels in a photovoltaic array. Conventional computer vision techniques may be used when the environment is ideal; however, glare and over- or under-exposure can negatively affect object detection algorithms.
Use of solar panels is particularly suited to tropical and/or equatorial installations, which are ideal for sun availability but are difficult working environments. Thus, there is a need for autonomous, robotic solutions to install solar panels in such environments. Doing so, however, requires improvements in robotic installation, such as those related to one or more of: guiding and navigating automatic picking of panels from shipping or storage containers, such as a crate; correct placement on installation hardware, such as a torque tube, in alignment with any previously placed panel(s); and detection (and avoidance) of potential mechanical structures, jigs and fixtures, such as clamps and fan gears.
Accordingly, the present invention is directed to a solar panel handling system that substantially obviates one or more of the problems due to limitations and disadvantages of the related art.
The solar panel handling system disclosed herein facilitates the installation of solar panels of a photovoltaic array on a pre-existing installation structure such as, for example, a torque tube. Installing solar panels can be made more efficient and reliable by combining tooling for handling the solar panel with components that enable mating of the solar panel to the solar panel support structure. Some embodiments use machine learning techniques to overcome environmental inconsistencies. The system can learn from examples with glare and illumination issues, and can generalize to new data during inference.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described, a system for installing a solar panel may comprise an end of arm assembly tool comprising a frame and suction cups coupled to the frame, and a linear guide assembly coupled to the end of arm assembly tool, wherein the linear guide assembly includes: a linearly moveable clamping tool including an engagement member configured to engage a clamp assembly slidably coupled to an installation structure, a force torque transducer configured to move the clamping tool along the installation structure, and a junction box coupled to the frame and including a controller configured to control the force torque transducer and the suction cups, and a power supply.
In another aspect, a method of installing a solar panel may comprise engaging an end of arm assembly tool with a solar panel, the end of arm assembly tool comprising a frame and suction cups coupled to the frame, positioning the solar panel relative to an installation structure having a clamp assembly slidably coupled thereto, engaging a linear guide assembly coupled to the end of arm assembly tool with the clamp assembly, the linear guide assembly comprising a linearly moveable clamping tool including an engagement member configured to engage the clamp assembly and a force torque transducer configured to move the clamping tool along the installation structure, and actuating the force torque transducer to move the clamp assembly along the installation structure so as to engage with a side of the solar panel, thereby fixing the solar panel relative to the installation structure.
In another aspect, a method of installing a solar panel may comprise using machine learning algorithms to automatically detect the center and corners of the solar panel (of varying sizes) to direct a robot to accurately pick and place the solar panel. In some embodiments, the method includes isolating the center and corners of the solar panels, and placing the solar panels in relation to a mounting, such as a torque tube, and a previously placed panel. In some embodiments, the method includes detecting ancillary equipment or structures, such as clamps and/or fan gears. In some embodiments, the techniques described herein do not use synthetic imagery and are thus a simulation-based approach that employs a predictor-corrector schema with possible emulation.
In another aspect, a method of installing a solar panel may include obtaining a first image of a solar panel during an in-progress solar installation. The method also includes estimating a plurality of features of the solar panel based on the first image using distance simulation, geometric correction and angular adjustment and generating a first set of control signals, based on the estimated plurality of features, for operating a first robotic controller for picking the solar panel. The method also includes obtaining a second image of the solar panel when the solar panel is in a perspective view and detecting placement of the solar panel based on the second image by determining if the solar panel is co-planar with and at a predetermined offset from a fixed solar panel, i.e., an already mounted solar panel. The method also includes generating a second set of control signals, based on the detected placement, for operating a second robotic controller for aligning the solar panel with the fixed solar panel.
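The co-planarity and offset determination above can be sketched as follows. This is a minimal illustration only, assuming the corners of each panel have already been estimated as 3-D points (in millimeters, with the torque-tube axis taken along x); the function names, tolerances, and corner ordering are hypothetical and not part of the disclosed system.

```python
import math

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three 3-D corner points."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    mag = math.sqrt(sum(c * c for c in n))
    return [c / mag for c in n]

def placement_ok(fixed_corners, picked_corners, target_gap_mm,
                 angle_tol_deg=1.0, gap_tol_mm=2.0):
    """True if the picked (floating) panel is co-planar with the fixed
    panel (surface normals agree within angle_tol_deg) and its nearest
    edge sits at the predetermined offset (within gap_tol_mm)."""
    n_fixed = plane_normal(*fixed_corners[:3])
    n_picked = plane_normal(*picked_corners[:3])
    cos_a = abs(sum(a * b for a, b in zip(n_fixed, n_picked)))
    angle = math.degrees(math.acos(min(1.0, cos_a)))
    # Edge-to-edge gap measured along the torque-tube axis (x here).
    gap = min(p[0] for p in picked_corners) - max(p[0] for p in fixed_corners)
    return angle <= angle_tol_deg and abs(gap - target_gap_mm) <= gap_tol_mm
```

In this sketch, the second set of control signals would be issued only once `placement_ok` returns true for the floating panel relative to the fixed (reference) panel.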
In another aspect, a method of installing a solar panel may include obtaining a first image of a solar panel in a staging area using a viewing camera. The method also includes estimating a plurality of areas/features of the solar panel based on the first image using at least one of distance simulation, geometric correction and angular adjustment. The method also includes generating a first set of control signals for operating a first robotic controller for picking the solar panel, where the first set of control signals are based on one or more of the estimated plurality of areas/features. The method also includes obtaining a second image of the solar panel when the solar panel is picked and is in a perspective orientation relative to the viewing camera and detecting an orientation in space of the picked solar panel based on the second image. The method also includes generating a second set of control signals, based on the detected orientation, for moving the picked solar panel to an installation position. The installation position aligns the picked solar panel with a previously installed solar panel. The floating/picked panel is moved into place to align with the fixed panel on the torque tube. "Moved into place" means that the floating panel is aligned with the fixed (reference) panel in direction, distance, and orientation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the present invention and, together with the description, further serve to explain principles of the invention and to enable a person skilled in the relevant arts to make and use the invention. The exemplary embodiments are best understood from the following detailed description when read in conjunction with the accompanying drawings. It is emphasized that, according to common practice, the various features of the drawings are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity. Included in the drawings are the following figures:
The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
The end of arm assembly tool 100 may include a frame 102 and one or more attachment devices 104 coupled to the frame 102. Example attachment devices 104 include suction cups or other structures that can be releasably attached to the surface of the solar panel 120 and, at least in the aggregate, maintain attachment during manipulation of the solar panel 120 by the end of arm assembly tool 100. The frame 102 may include several trusses 102-A for providing structural strength and stability to the frame 102. The frame 102 also functions as a base for the end of arm assembly tool 100 and other related components of the solar panel handling system disclosed herein.
Other related components of the solar panel handling system disclosed herein may be coupled to the frame 102 so as to fix a relative position of the components on the end of arm assembly tool 100. One or more of the various components of the solar panel handling system may be coupled to one or more of the trusses 102-A so as to fix a relative position of the components on the end of arm assembly tool 100.
The attachment devices 104 are configured to reliably attach to a planar surface such as, for example, a surface of a solar panel, such as by using vacuum. In a suction cup embodiment, the suction cups can be actuated by pushing the cup against the planar surface, thereby pushing out the air from the cup and creating a vacuum seal with the planar surface. As a consequence, the planar surface adheres to the suction cup with an adhesion strength that is dependent on the size of the suction cup and the integrity of the seal with the planar surface. In some embodiments, the suction cups engage with the solar panel to create an air-tight seal, and then a vacuum pump sucks the air out of the suction cups, generating the vacuum required for the proper adhesion to the solar panel. In some embodiments, an air inlet (not shown) provides air onto the planar surface when the planar surface is sealed to the suction cup so as to deactivate the vacuum and release the planar surface from the suction cup.
The system may further include a linear guide assembly 106 coupled to the end of arm assembly tool 100. The linear guide assembly 106 includes a linearly movable clamping tool 108 with an engagement member 108-A configured to engage a clamp assembly coupled to an installation structure. The linear guide assembly 106 can be actuated to move the clamping tool 108 along an axis between, for example, an extended position and a retracted position. The axis of movement of the clamping tool 108 may be parallel to an axis of the installation structure. Thus, the linear guide assembly 106 can move the clamping tool 108 and the engagement member 108-A along the installation structure.
In some embodiments, the engagement member 108-A may include electromagnets which may be actuated to grasp a clamp assembly 602 (see
The linear guide assembly 106 is actuated using a force torque transducer 110. In some embodiments, the linear guide assembly 106 and the force torque transducer 110 may form a rack and pinion structure such that the rotation of the force torque transducer 110 results in advancement or retraction of the clamping tool 108. In some embodiments, the linear guide assembly 106 may be a hydraulic assembly including a telescoping shaft coupled to the clamping tool 108. In such embodiments, the force torque transducer 110 may be configured in the form of a pump for pumping a hydraulic fluid. In other embodiments, the force torque transducer 110 may be configured in the form of or coupled to a linear drive motor that engages a surface of the telescoping shaft coupled to the clamping tool 108.
In some embodiments, the linear guide assembly 106 may include an electric rod actuator to move the clamping tool 108 parallel to an axis of the installation structure.
In some embodiments, the guide assembly 106 may include a roller 606 to facilitate the movement of the clamping tool 108 along the installation structure 604. The roller may, for example, include a bearing or other components designed for reducing friction while the clamping tool 108 moves relative to the installation structure. The roller may be coupled with a sensor, such as a force sensor or rotation sensor, to provide feedback to a controller.
In some embodiments, the guide assembly 106 may include a spring mechanism 608 that enables small amounts of tilting (up to 15 degrees of tilt) of the clamping tool 108 relative to the installation structure 604. Such tilting may occur when the orientation assembly 804 tilts the end of arm assembly tool 100 relative to the installation structure 604 in order to appropriately level the solar panel.
The system may further include a junction box 112 coupled to the frame 102. The junction box 112 may include a controller configured to control the force torque transducer 110 and the attachment devices 104. In some embodiments, the junction box 112 may also include a power supply or a power controller for controlling the power supply to various components.
In some embodiments, the controller 112 may include a processor operationally coupled to a memory. The controller 112 may receive inputs from sensors associated with the solar panel handling system (e.g., an optical sensor or a proximity sensor 108-B described elsewhere herein). The controller 112 may then process the received signals and output a control command for controlling one or more components (e.g., the linear guide assembly 106, the clamping tool 108, or the attachment devices 104). For example, in some embodiments, the controller 112 may receive a signal from a proximity sensor determining that the clamp assembly is approaching a trailing edge of a solar panel being installed and accordingly reduce the speed of the linear guide assembly 106 to reduce excessive forces and impacts on the solar panel.
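The speed reduction described above can be sketched as a simple control law: the controller ramps the commanded speed of the linear guide assembly 106 down as the proximity sensor reports a shrinking distance to the panel's trailing edge. This is a minimal sketch; the function name and the speed/zone values are hypothetical, not taken from the disclosure.

```python
def guide_speed(distance_mm, full_speed=100.0, slow_zone_mm=50.0,
                min_speed=10.0):
    """Commanded speed (mm/s, illustrative units) for the linear guide
    assembly, based on the proximity-sensor distance between the clamp
    assembly and the panel's trailing edge.  Speed ramps down linearly
    inside the slow zone so the clamp does not impact the panel at
    full speed; at contact (distance 0) the guide stops."""
    if distance_mm >= slow_zone_mm:
        return full_speed
    if distance_mm <= 0:
        return 0.0
    ramped = full_speed * distance_mm / slow_zone_mm
    return max(min_speed, ramped)
```

A real controller would likely add filtering of the sensor signal and a force-feedback cutoff from the force torque transducer; this sketch only illustrates the proximity-based ramp.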
Referring to
In some embodiments, one or more sensors, such as optical sensors 802, may be used to detect and recognize objects to position and control the installation with improved accuracy. The sensor(s) may be implemented together with a neural network of, for example, an artificial intelligence (AI) system. For example, use of a neural network can include acquiring and correcting images related to the solar panel handling system, the solar panels (both installed and to be installed), and the installation environment (both the natural environment, such as topography, and installed equipment, such as structures related to the solar panel array). Also, for example, use of a neural network can include acquiring and correcting positional or proximity information. The corrected images and/or the corrected positional or proximity information are input into the neural network and processed to estimate movement and positioning of equipment of the solar panel handling system, such as that related to autonomous vehicles, storage vehicles, robotic equipment, and installation equipment. The estimated movement and positioning are published to a control system associated with the individual equipment of the solar panel handling system or to a master controller for the solar panel handling system as a whole.
In some embodiments, the signal from the optical sensor may be input to the controller. In some embodiments, the solar panel handling system may further include an orientation assembly 804 (see
In some embodiments, the controller 112 may also be configured to control the attachment devices 104 so as to activate or deactivate the attachment/detachment thereof. For embodiments in which the attachment devices 104 are suction cups, a vacuum can enable coupling or release of the solar panels 120 with the end of arm assembly tool 100.
In some embodiments, the installation structure 604 may have an octagonal cross-section, as shown, e.g., in
In some embodiments, the assembly tool 100 may be configured to couple with an assembly moving robot 903 (an example of which is shown in
Referring now to
Once the solar panel is in position on the installation structure, the force torque actuator 110 actuates the guide assembly 106 of the end of arm assembly tool 100 to contact the engagement member 108-A of the clamping tool 108 with a clamp assembly 602. This clamp assembly was originally positioned on the installation structure outside the area to be occupied by the solar panel being installed, but also sufficiently close so as to be reached by the relevant components of the end of arm assembly tool 100. Surfaces and features of the engagement member 108-A may be located and sized so as to mate with complementary features on the clamp assembly 602. After this contact, the force torque actuator 110 is actuated (either continued to be actuated or actuated in a second mode) to axially slide the clamp assembly 602 along a portion of the length of the installation structure 604. Axial sliding of the clamp assembly 602 engages a receiving channel of the clamp assembly 602 with the trailing edge of the just-installed solar panel. Sensors, such as in the force torque actuator 110 or in the clamping tool 108, can provide feedback to the controller indicating full engagement of the receiving channel of the clamp assembly 602 with the trailing edge of the solar panel. Once the clamp assembly 602 is positioned, the guide assembly 106 is retracted and installation of the next solar panel can occur.
In some embodiments, the linear guide assembly 106 may include a proximity sensor 108-B configured to sense a distance between the engagement member 108-A and the trailing edge of the solar panel 120 during an operation of installation of the solar panel 120. An output from the proximity sensor 108-B may be used to suitably control the speed of the clamping tool 108 during the operation of the linear guide assembly 106 so as to avoid excessive forces and impacts on the solar panel 120. In some embodiments, the proximity sensor 108-B may be, for example, an optical or an audio sensor (e.g., sonar) that detects a distance between the leading edge of the solar panel 120 and the engagement member 108-A; in other embodiments, the proximity sensor 108-B may be a limit switch that is retracted by contact.
With further reference to
As shown
In accordance with
In some embodiments, the ground vehicle 907 may be an autonomous vehicle in which the neural network and artificial intelligence control the movement and operation and the module vehicles 1005 are towed or coupled to the ground vehicle 907. In other embodiments, the module vehicles 1005 may be an autonomous vehicle in which the neural network and artificial intelligence control the movement and operation and the ground vehicle 907 is towed or coupled to the module vehicles 1005. Also, in some embodiments, the assembly moving robot 903 is mounted on one of the ground vehicles 907 and the module vehicles 1005. In other embodiments, the assembly moving robot 903 can be mounted on a dedicated robot vehicle.
A process for installing the solar panels is shown in
As shown in
As one of ordinary skill in the art would recognize, modifications and variations in implementation may be used. For example, as shown in
In some embodiments, as illustrated in
In some embodiments, as illustrated in
In the replenishment operation using the example of a forklift, the forklift (whether autonomous, remote controlled or manually operated) may be used to return empty boxes or containers of the solar panels to a waste area, remove straps, open lids, or cut away box faces from boxes being delivered, pick up boxes to correct rotation/orientation of the solar panels, or perform other tasks. Further, the forklift may be maintained near the ground vehicle to wait for the system to deplete the next box of solar panels. Thus, the forklift may manually or autonomously discard a depleted box, position a next box on the ground vehicle or the module vehicle, open the box (including removing straps, opening lids, or cutting away box faces) and back away from the ground vehicle/module vehicle. As described, the replenishment may be autonomous, remote controlled, or manually operated, for example.
Some embodiments perform solar panel segmentation by capturing images of solar panels and torque tubes under varying lighting conditions.
Some embodiments continuously collect images (and build datasets) and use the images for improving accuracy of the models. Some embodiments use human annotations to increase accuracy of the models. Some embodiments allow users to tune parameters of the segmentation model.
Some embodiments include separate models for semantic segmentation and instance segmentation.
Some embodiments continue to capture training images while installing solar panels.
The method also includes detecting (5004) solar panel segments by inputting the image to a trained neural network that is trained to detect solar panels in poor lighting conditions. Neural networks may be implemented using software and/or hardware (sometimes called neural network hardware) using conventional CPUs, GPUs, ASICs, and/or FPGAs. In some embodiments, the trained neural network comprises (i) a model for semantic segmentation for identifying a solar panel segment, and (ii) a model for instance segmentation for identifying a plurality of solar panels. In some embodiments, the trained neural network uses a Mask R-CNN framework for instance segmentation. The trained neural networks detect solar panel segments based on features extracted from an image of an in-progress solar installation. In some embodiments, the image obtained is input to the neural network through ROS (e.g., the input image goes from the OpenCV module to a neural network module (Detectron)). Example techniques for training the neural network are described below in reference to
The method also includes estimating (5006) panel poses for the one or more solar panels, based on the solar panel segments, using a computer vision pipeline. In some embodiments, the computer vision pipeline includes one or more computer vision algorithms for post-processing, Hough transform, filtering and segmentation of Hough lines, finding horizontal and/or vertical Hough line intersections, and panel pose estimation using predetermined 3D panel geometry and corner locations. In some embodiments, the computer vision pipeline locates the clamps and/or the center structures to estimate the panel poses. In some embodiments, the computer vision pipeline locates the one or more torque tubes and/or the clamp position to estimate the panel poses. In some embodiments, the computer vision pipeline locates the nut. After locating the nut, the socket wrench mounted on a smaller robotic arm may engage with the nut and tighten it to secure the panel in place. Until this step is performed, the clamps may be loose and panels may fall off due to wind.
In some embodiments, estimating the panel poses is performed using conventional machine vision hardware for locating where panel(s) are in a 3-D space. In some embodiments, this is a rough identification of round edges, and is not intended to be very precise. Hough transform may be used subsequently to determine precise locations of edges, which is followed by extrapolation of edge lines of panels, determination of where panels cross, and identification of a panel corner. The panel corners are published to identify where the panel is with respect to the robot. For example, based on a panel geometry in 3-D, the panel's pose is calculated based on the location of corners of the panel in the image.
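The step of extrapolating edge lines and determining where they cross can be illustrated with the standard Hough normal form, in which a line is parameterized as x·cos(θ) + y·sin(θ) = ρ. The sketch below, with a hypothetical function name, solves the 2×2 system for the intersection of a (roughly horizontal) line and a (roughly vertical) line, yielding a candidate panel corner in image coordinates.

```python
import math

def hough_intersection(rho1, theta1, rho2, theta2):
    """Intersection (x, y) of two Hough lines given in normal form
    x*cos(theta) + y*sin(theta) = rho.  Returns None for (near-)
    parallel lines, which cannot correspond to a panel corner."""
    a1, b1 = math.cos(theta1), math.sin(theta1)
    a2, b2 = math.cos(theta2), math.sin(theta2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel edge lines never cross
    x = (rho1 * b2 - rho2 * b1) / det
    y = (a1 * rho2 - a2 * rho1) / det
    return (x, y)
```

In a full pipeline, each intersection of a filtered horizontal line with a filtered vertical line would be published as a candidate corner, and the four corners together with the known 3-D panel geometry would feed the pose calculation.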
In some embodiments, for estimating the panel poses, the computer vision pipeline uses a PnP (Perspective-n-Point) solver with camera intrinsic parameters (it is aware of its own camera distortion and parallax). Then the extrinsic parameters capture the camera's position relative to the robot using the robotic arm and EOAT pose at the moment of image capture. The robot pose may be captured continuously with a time stamp. That time stamp may then be used to match the robot pose to the camera acquisition time stamp. In some embodiments, the computer vision pipeline uses a known pose of the robotic arm and end of arm tool (where the camera sits) at the time of image capture to calculate a position of one or more corners of a panel.
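The timestamp matching described above, in which the continuously logged robot pose is matched to the camera acquisition timestamp, can be sketched as a nearest-neighbor lookup in a time-ordered pose log. The function name and log format below are hypothetical, assuming the log is a list of (timestamp, pose) samples sorted by time.

```python
import bisect

def match_robot_pose(pose_log, image_ts):
    """Return the logged robot pose whose timestamp is closest to the
    camera acquisition timestamp.  pose_log is a time-ordered list of
    (timestamp, pose) tuples captured continuously from the robot."""
    stamps = [t for t, _ in pose_log]
    i = bisect.bisect_left(stamps, image_ts)
    if i == 0:
        return pose_log[0][1]
    if i == len(stamps):
        return pose_log[-1][1]
    before, after = pose_log[i - 1], pose_log[i]
    # Pick whichever neighboring sample is nearer in time.
    return before[1] if image_ts - before[0] <= after[0] - image_ts else after[1]
```

The matched pose supplies the extrinsic parameters (camera position relative to the robot via the arm and EOAT pose) at the moment of image capture; a production system might instead interpolate between the two neighboring samples.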
The method also includes generating (5008) control signals, based on the estimated panel poses, for operating a robotic controller for installing the one or more solar panels. In some embodiments, after the panel is found, the location is projected along the tube to seek clamp pixels to identify the clamp location (e.g., how far away the clamp is, how close it is for the clamp puller). Some embodiments use clamp positions to verify that clamps are within an allowable window required by the clamp puller on the EOAT. Some embodiments use the center structures to determine sequence on whether to place one or two panels to avoid collisions with the fan gear. Some embodiments use panel position to make sure that the trailer is in a valid position relative to the tube so that the robot is within reach of the work to be performed. Some embodiments use the pose from the leading panel to then guide the lower robot in its fine tube acquisition, which drives the positions of the upper and lower robot for the panel place and the nut drive. In some embodiments, the fine tube acquisition described above uses a horizontal and vertical laser to create a profilometer system that finds the tube and the clamp positions. This refines the working pose from the coarse tube, reducing the 10-20 mm error to less than plus or minus 5 mm. At the first panel, the coarse tube error is within 5 mm, but as this is projected out, the errors grow and the fine tube is used to constrain that to under plus or minus 5 mm.
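The clamp-window verification above can be sketched as a simple range check on the detected clamp positions along the tube. This is an illustration only; the function name, units, and window representation are hypothetical.

```python
def clamps_outside_window(clamp_positions_mm, window_mm):
    """Verify that detected clamp positions (mm along the torque tube)
    lie inside the allowable window (lo, hi) required by the clamp
    puller on the EOAT.  Returns the list of out-of-window clamps; an
    empty list means every clamp is reachable by the clamp puller."""
    lo, hi = window_mm
    return [p for p in clamp_positions_mm if not (lo <= p <= hi)]
```

A controller could pause the place sequence and notify an operator whenever this check returns a non-empty list, rather than attempting a pull that the EOAT cannot reach.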
Synthetic images may be used for training machine learning algorithms for solar panel installation. However, synthetic images or other synthetic training data may be cost prohibitive, and may even be impossible to generate in some situations. Producing realistic images could take weeks of training, even on expensive hardware, and may need extensive supervision. Generating useful synthetic data is typically a trial-and-error process. There are also risks associated with overtraining using synthetic data. Because artificial data is often used in areas where real-world data is scarce, there is a chance that the data generated might not accurately reflect real-world scenarios. Given these constraints, there is a need for systems and techniques that do not rely on synthetic data. The techniques described herein are not site limited because images are not used for learning. The techniques are panel centric, and the background profile does not affect correctness. Some embodiments use a predictor-corrector mechanism whereby a center is predicted, tested for failure, and the failure is used as feedback to improve a next estimate of the center.
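The predictor-corrector mechanism can be sketched as an iterative loop: predict a center, measure the residual failure, and feed a fraction of that residual back into the next prediction. The function names, the gain, and the termination criteria below are hypothetical, assuming a caller supplies a `measure_error` routine that tests a predicted center and reports its (dx, dy) residual.

```python
def refine_center(initial_guess, measure_error, gain=0.5, tol=1.0,
                  max_iter=50):
    """Predictor-corrector loop for the panel-center estimate.

    initial_guess : (x, y) predicted center in image coordinates.
    measure_error : callable testing a center and returning the (dx, dy)
                    residual (the 'failure') of that prediction.
    The residual is fed back, scaled by gain, to improve the next
    estimate until it falls within tol or max_iter is reached."""
    cx, cy = initial_guess
    for _ in range(max_iter):
        dx, dy = measure_error((cx, cy))
        if abs(dx) <= tol and abs(dy) <= tol:
            break  # prediction accepted
        cx += gain * dx  # corrector step
        cy += gain * dy
    return (cx, cy)
```

With a gain below 1 the loop converges geometrically for a well-behaved residual; a real system would also bound the correction step to guard against noisy measurements.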
Some embodiments use solar physics, the sun's elevation, and/or azimuth to extract the sun's relative position in the sky. Some embodiments estimate the glare based on the estimation of the sun's position. Some embodiments use high fidelity noise cancellation and image correction algorithms to false color the glare. Some embodiments identify the center and corners of the panel, detect wear on the torque tube and false color wear lines/scratches in shading (e.g., blue shading), place the panel, and/or detect structures, such as clamps and fan gears.
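Extracting the sun's relative position from solar physics can be sketched with the standard declination and hour-angle formulas. This is a simplified, non-ephemeris-grade model intended only to illustrate how elevation could be estimated from the day of year, local solar time, and site latitude; the function name is hypothetical.

```python
import math

def sun_elevation_deg(day_of_year, solar_hour, latitude_deg):
    """Approximate solar elevation angle in degrees.

    Uses the textbook approximations:
      declination ~ -23.44 * cos(360/365 * (N + 10)) degrees
      hour angle  ~ 15 * (solar_time - 12) degrees
      sin(elev)   = sin(lat)sin(decl) + cos(lat)cos(decl)cos(hour)
    Accurate to within about a degree -- enough to predict when glare
    from a panel's reflective coating is likely during picking."""
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(x) for x in (latitude_deg, decl, hour_angle))
    sin_el = (math.sin(lat) * math.sin(d) +
              math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.asin(sin_el))
```

An installation system could use such an estimate, together with the panel orientation during picking, to anticipate when glare will appear in the camera image and schedule image correction accordingly.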
Some embodiments pick a solar panel by detecting corners and center of a panel to assist a top robot (sometimes referred to as a top robotic arm) to accurately pick the solar panel. Panels may come in different sizes. Some embodiments detect a placed panel, and assist a bottom robot for accurately aligning the panel relative thereto. Some embodiments detect a clamp location, and assist an operator with clamp movement. Panels are protected from impact. For example, some embodiments detect a fan gear location, and assist an operator in safe placement of the solar panel relative thereto. Panels are also guarded against collision. These aspects are described below in detail.
According to some embodiments, a first step for picking a solar panel is to estimate where a panel's center is. An upper robotic arm (e.g., the upper robot End-of-Arm Tooling (EOAT) 4406) may pick the panel using the center estimation. The solar panels may come in different sizes but have a same general shape: they are predominantly rectangular, even if the solar panels are sourced from different manufacturers. The algorithmic techniques described herein exploit the rectangular shape of the solar panel for estimating the center. The size can be different, meaning the width and length of the solar panel can change, which affects where the center of the panel is and consequently where the corners of the panel are. Although described herein with respect to rectangular solar panels, the methods and techniques can be applied and/or suitably modified for use with other polygon-shaped solar panels.
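Exploiting the rectangular shape for center estimation can be sketched as follows: for a rectangle, the diagonals bisect each other, so the center is the shared midpoint of both corner pairs, regardless of the panel's width and length. The function name and corner ordering are hypothetical.

```python
def panel_center(corners):
    """Center of a rectangular panel from its four detected corners,
    ordered around the rectangle (e.g., TL, TR, BR, BL), in image or
    plane coordinates.  Works for any panel size because only the
    rectangular shape is assumed, not particular dimensions."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # Midpoints of the two diagonals coincide for a true rectangle;
    # averaging both tolerates small corner-detection noise.
    mx = (x0 + x2 + x1 + x3) / 4.0
    my = (y0 + y2 + y1 + y3) / 4.0
    return (mx, my)
```

The upper robotic arm would then target this center for the pick, with the corner positions retained for the subsequent place and alignment steps.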
After the center and the corners are identified, the upper arm may pick the solar panel and enter a place mode. In the place mode, the picked panel needs to be aligned to a torque tube and in relation to an existing (or previously placed) panel, avoiding collision with that panel. Some embodiments also detect other structures in the mounting environment, such as a clamp, to avoid collision with such structures. The placed panel cannot collide with the existing panel and has to be properly aligned with the existing panel so that once the alignment process is complete, a lower robotic arm (e.g., the lower robot EOAT 4408) steps up and tightens a screw on the clamp, according to some embodiments. So the upper robotic arm picks and places the solar panel, and the lower robotic arm ensures alignment, optionally by using a laser. In some embodiments, when image recognition software detects other structures in the mounting environment, such as a fan gear, an operator is notified so that the placement of the panel can be offset to avoid collision or impact with the structure.
During picking, placement and/or alignment, the sun's travel may have an impact. The sun's travel may depend on the time of day and/or the season. Some embodiments use the sun's elevation and/or the sun's azimuth. Azimuth is the angle at which the sun's rays strike the ground. The sun's elevation is where the sun is located in the sky on a particular day, and may be particularly important for solar panel installation. Solar panels typically have a coating on top that is reflective by nature. Solar physics needs to be taken into account: high transmission is desired so that more photons are converted to electrons, but excessive conversion results in underperformance of the solar panel. To protect the panel from this unwanted irradiation, some embodiments include a thin film of coating applied on the solar panel that is reflective in nature. Due to the sun's travel on a particular day and the orientation of these solar panels during picking, there is a lot of reflectance and glare seen on a panel. Consequently, when the camera captures that image, there are bright white spots in the image, which impact and throw off computer vision algorithms.
Another failure mode arises because the clamps have to be manually adjusted: the clamps have to be moved along the torque tube. This physical movement of the clamps involves metal-on-metal contact, and may cause a scar or a graze on the torque tube. When the lower robotic arm looks at this, it reflects white on a white background. The laser line that was supposed to be an indicator of alignment is focused on a metallic object, so it reflects white. From the image perspective, because the background is also white, it is not possible to know where the laser lines are because the two colors merge. In some embodiments, this is circumvented using a colored (e.g., blue) tape so that there is a background that provides contrast. Because of these problems, there is a need for non-synthetic-data-based systems. The techniques described herein provide similar functionality as those described above for picking, placement and alignment, avoiding collision and/or impact by using geometry.
Referring to Figure MA, suppose point A is determined to be an internal point. Two normal lines AB and AC are dropped to the longest visible horizontal line 5106 and the longest visible vertical line 5108, respectively. The lines are not normal to the panel, meaning that the angles α and β, where the normal lines intersect the longest visible horizontal and vertical lines, are generally not 90 degrees. These angles will be 90 degrees only if the panel is aligned with the image, which can happen if the longest visible horizontal line and the image boundary are parallel to each other. Since that is not the typical case, α and β will assume values that are either less than or greater than 90 degrees. Some embodiments repeat these steps for all internal points to create a matrix.
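The intersection-angle computation above can be sketched as follows. This is a minimal, illustrative example (the function name, the edge-line coordinates, and the assumption that the dropped lines are image-axis-aligned rays are not from the source): the angle between a ray dropped from an interior point and a tilted panel edge deviates from 90 degrees by the panel's rotation relative to the image.

```python
import math

def intersection_angle(ray_dir, line_p1, line_p2):
    """Acute angle (degrees) between a ray direction and the line
    through line_p1 and line_p2."""
    lx, ly = line_p2[0] - line_p1[0], line_p2[1] - line_p1[1]
    rx, ry = ray_dir
    dot = rx * lx + ry * ly
    cos_t = abs(dot) / (math.hypot(rx, ry) * math.hypot(lx, ly))
    return math.degrees(math.acos(min(1.0, cos_t)))

# Panel edges tilted ~10 degrees relative to the image axes
# (hypothetical coordinates for illustration).
h_edge = ((0.0, 0.0), (10.0, 1.763))   # "horizontal" edge, ~10 deg tilt
v_edge = ((0.0, 0.0), (-1.763, 10.0))  # perpendicular "vertical" edge

alpha = intersection_angle((0.0, 1.0), *h_edge)  # vertical drop to horizontal edge
beta = intersection_angle((1.0, 0.0), *v_edge)   # horizontal drop to vertical edge
```

With a 10 degree panel tilt, both α and β come out near 80 degrees rather than 90, matching the behavior described above.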
Some embodiments identify a subset of points that satisfy the following properties: (a) horizontal distance (dhorizontal) greater than or equal to 45% (or the highest value) of the panel width, which may be user input or the width calculated earlier; and (b) vertical distance (dvertical) greater than or equal to 45% (or the highest value) of the panel height, which may be user input or the height calculated earlier. This use of tolerance levels is important because the solar panels may be sourced from different vendors and the panel width is not a fixed number. The 45% value is used for illustration purposes, and other user-defined values can be used. Various embodiments may use different percentage values depending on tolerance for errors or need for accuracy.
Some embodiments calculate both the horizontal angle (α) and the vertical angle (β) of intersection from each qualifying point with the longest visible Hough lines. Some embodiments correct the estimated distances using the calculated angles. Angles α and β may need to be normalized, meaning that if the distances are evaluated as vertical or horizontal distances, the distances need a correction based on 90 minus α or 90 plus α, and/or 90 minus β or 90 plus β. β equal to 0 would mean α equal to 90 degrees. So the α and β angles are used as guides.
Some embodiments isolate the subset of points that satisfy the following properties: (a) the horizontal distance is greater than a predetermined percentage (e.g., 48%, or the highest value) of the panel width, and (b) the vertical distance is greater than a predetermined percentage (e.g., 48%, or the highest value) of the panel height. Point A will asymptotically approach the panel center P when dvertical is approximately 50% of the panel height, dhorizontal is approximately 50% of the panel width, and/or α and β are each approximately 90 degrees. These are sometimes referred to as representative points. These points are closer to the center of the solar panel and correspond to the bubbles shown on the circle (circle of equal probability, or CEP). These points are spread or distributed in a circular format. The points of interest for estimating the geometric center (i) are interior to the solar panel, (ii) are proximate to the longest visible horizontal and vertical Hough lines, and (iii) form a near-normal orientation with the longest visible horizontal and vertical Hough lines. 50% would mean the center, so the points meeting at least 48% have a circular error probability (CEP) error of less than 2%. As long as the points evaluated satisfy the 48% distance condition, the points are within 2% of the geometric center, but they are still a set of points, not one single point. Some embodiments determine the centroid of the set of points to identify the center of the panel. Some embodiments use the Pythagorean theorem to estimate the corners of the panel in both directions. Some embodiments correct the distance with the angles to determine the corners of the panel. The 48% value is used for illustration purposes, and other user-defined values can be used. Various embodiments may use different percentage values depending on tolerance for errors or need for accuracy.
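The coarse filter, angular correction, fine filter, centroid, and corner steps described above can be sketched as follows. This is a simplified, axis-aligned illustration (the function name, the point-tuple format, and the use of a sine factor to convert a measured axis-aligned drop into a perpendicular distance are assumptions, not from the source):

```python
import math

def estimate_center_and_corners(points, width, height, alpha, beta,
                                coarse=0.45, fine=0.48):
    """Estimate the panel center and corners from interior points.

    points: list of (dh, dv, x, y), where dh/dv are the measured
    horizontal/vertical distances from pixel (x, y) to the longest
    visible vertical/horizontal edge lines; alpha/beta are the
    intersection angles in degrees.
    """
    # Step 1: coarse filter at e.g. 45% of the panel dimensions.
    subset = [p for p in points
              if p[0] >= coarse * width and p[1] >= coarse * height]
    # Step 2: correct measured (axis-aligned) distances to true
    # perpendicular distances using the intersection angles.
    sa, sb = math.sin(math.radians(alpha)), math.sin(math.radians(beta))
    corrected = [(dh * sb, dv * sa, x, y) for dh, dv, x, y in subset]
    # Step 3: fine filter at e.g. 48% -- the CEP candidate set.
    cand = [p for p in corrected
            if p[0] >= fine * width and p[1] >= fine * height]
    # Step 4: centroid of the candidate set approximates the center.
    cx = sum(p[2] for p in cand) / len(cand)
    cy = sum(p[3] for p in cand) / len(cand)
    # Step 5: corners at half-width/half-height offsets from the center
    # (their distance from the center follows the Pythagorean relation).
    half_w, half_h = width / 2.0, height / 2.0
    corners = [(cx + sx * half_w, cy + sy * half_h)
               for sx in (-1, 1) for sy in (-1, 1)]
    return (cx, cy), corners

# Hypothetical interior points near the center of a 100 x 60 panel
# whose edges are aligned with the image (alpha = beta = 90 degrees).
pts = [(x, y, x, y) for x in (48, 50, 52) for y in (29, 30, 31)]
center, corners = estimate_center_and_corners(pts, 100, 60, 90.0, 90.0)
```

In this axis-aligned toy case the centroid lands exactly on the geometric center (50, 30); with a tilted panel the sine corrections in step 2 shrink the measured drops toward their true perpendicular values before the fine filter is applied.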
The center of the panel can be used to control attaching devices, such as suction cups (e.g., 8 suction cups), on an upper robotic arm. The suction cups may be symmetrically distributed on the robotic arm. Centering the robotic arm on the solar panel avoids misalignment in picking the solar panel. If the solar panel does not get picked in the right orientation, meaning if the picking is creating a tilt, placing the panel takes a lot longer and then the panel has to be aligned next to a previously placed panel.
A goal may be to complete the picking process and placing process in under 60 seconds.
Some embodiments use CEP techniques to estimate the center based on proximity and perpendicularity, and may not include all the steps described above. The CEP is a circle, but not a geometrically perfect one. Since the points that form this circle are determined by simulation, they form a theoretical circle, which can be visualized based on the best-fit geometry. Hence, the estimated center of the panel is the centroid of these points. CEP implies that some embodiments afford a predetermined percentage (e.g., 2%) of deviation from the actual center. Perpendicularity is the condition that ensures that every candidate point used to form the CEP is near normal to the longest visible Hough lines, with glare corrected and/or symmetry used, or an energy line created based on pixels.
Some embodiments use the Shi-Tomasi method and/or the Harris method for estimating the center and/or corners of a solar panel.
Steps described above for center and/or corner estimation may be used for a stationary panel, a picked panel and/or a previously placed panel.
A next step is to place the picked panel. Panel placement balances energy and proximity between the picked panel and a previously placed panel. In some embodiments, images are captured in perspective. Some embodiments compare the fixed panel and the moving panel dynamically (e.g., while the picked panel is moving in the air). Different elevations between the picked panel and the placed panel (the reference) create a potential. In some embodiments, placement is completed when 6 degrees of freedom (3 rotational and 3 translational) are aligned for the two panels.
Referring to
Some embodiments detect structures, e.g., clamp location and/or fan gear location, to assist an operator with clamp movement and/or in safe placement of a solar panel.
The algorithms described above for picking, placement, and detection of structures, e.g., the clamp and fan gear, are independent of each other. Various embodiments use one or more of the algorithms or combinations thereof for solar panel installations. Some embodiments combine one or more of these methods with conventional methods for solar panel installation.
The techniques described herein have several advantages over conventional techniques. For example, systems and methods according to the techniques described herein obtain a recommendation for pick and place using non-synthetic imaging schema and hence provide scalability to any terrain and conditions. Closed-loop feedback helps auto-correct and improve the estimation process. The simulation procedure is condition independent. Some embodiments take into account optical and geometric properties to accommodate variations in environment and manufacturer specifications. Some embodiments use pixel energy and pixel color density components during the estimation process.
The method also includes estimating (5704) a plurality of features of the solar panel based on the first image using distance simulation, geometric correction and angular adjustment. In some embodiments, the plurality of features include interior points, center, corners and/or edges of the solar panel. In some embodiments, an initial set of features may be estimated, and the other features may be calculated based on the estimation.
In some embodiments, estimating the plurality of features of the solar panel further includes, in accordance with a determination that a panel identifier, such as a bar code label, is visible in the first image, detecting a location of the panel identifier; and using the location to estimate the plurality of features of the solar panel. For example, the panel identifier may be placed on the rim of the solar panel at one of the edges. In some instances, a panel identifier like a bar code can be detected based on a cluster of Hough lines. If the density of vertical Hough lines in an area is high, it indicates the existence of a bar code.
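The density test above can be sketched as follows. This is a toy illustration (the function name, the segment representation as x-coordinates of near-vertical lines, and the window/density thresholds are assumptions):

```python
def barcode_regions(v_segments, img_width, window=50, min_lines=8):
    """Flag horizontal windows with a high density of near-vertical
    Hough line segments, indicating a likely bar code label.

    v_segments: x-coordinates of detected near-vertical line segments.
    """
    regions = []
    for x0 in range(0, img_width, window):
        count = sum(1 for x in v_segments if x0 <= x < x0 + window)
        if count >= min_lines:
            regions.append((x0, x0 + window))
    return regions

# Ten closely spaced vertical lines (a bar code pattern) plus two
# sparse panel-edge lines elsewhere (hypothetical coordinates).
segs = [205 + 2 * i for i in range(10)] + [30, 400]
regions = barcode_regions(segs, 500)
```

Only the window containing the dense cluster is flagged; isolated vertical edges elsewhere in the image do not trigger a detection.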
In some embodiments, estimating the plurality of features of the solar panel includes: identifying and plotting (e.g., drawing of the Hough line on the detected edge boundary), based on the first image, a longest visible horizontal line and a longest visible vertical line signifying edges of the solar panel; for each point in the first image that is internal to the solar panel in the first image, calculating horizontal and vertical distance from the respective point to the longest visible horizontal line and the longest visible vertical line, respectively; and identifying, based on the calculated horizontal and vertical distances, a subset of points for which: (i) horizontal distance is greater than a first predetermined percentage (e.g., 45%, or the highest value) of width of the solar panel, and (ii) vertical distance is greater than a second predetermined percentage (e.g., 45% or the highest value) of height of the solar panel; calculating horizontal angles of intersection and vertical angles of intersection from each point in the subset of points with the longest visible horizontal line and the longest visible vertical line, respectively; correcting the calculated horizontal and vertical distances based on the calculated horizontal angles of intersection and vertical angles of intersection, to obtain corrected horizontal and vertical distances (range is the difference between orthogonality and the calculated angle); identifying, based on the corrected horizontal and vertical distances, a candidate set of points for which: (i) horizontal distance is greater than a first predetermined percentage (e.g., 48%, or the highest value) of width of the solar panel, and (ii) vertical distance is greater than a second predetermined percentage (e.g., 48% or the highest value) of height of the solar panel; computing a centroid of the candidate set of points to obtain a center of the solar panel; estimating candidate corners of the solar panel using Pythagorean theorem based 
on the centroid and the candidate set of points; and correcting a distance for the candidate corners based on the calculated horizontal and vertical angles of intersection to obtain final corners of the solar panel. Examples of these steps are described above in reference to
In some embodiments, the identifying and plotting are performed using the Hough transform. The longest visible horizontal line and the longest visible vertical line are Hough lines. Hough lines are good indicators for geometric feature detection. Both standard and probabilistic Hough procedures may be used in the estimation, and parameters for these methods can be modified for the particular application. Hough lines are used to detect straight lines in an image. Straight lines can be defined mathematically via (i) a slope and offset, (ii) an angle and radius, or (iii) extreme points (e.g., beginning and end coordinates of the line segment). In the standard formulation of the Hough transform, the angle and radius vector are estimated as parameters in the polar coordinate system; this generates two degrees of freedom that are modifiable. In the probabilistic formulation of the Hough transform, the coordinates of the end points are estimated as parameters in the Cartesian coordinate system; this generates four degrees of freedom that are modifiable. Other techniques to detect straight lines can be used in place of Hough transform techniques.
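The standard (polar) formulation can be illustrated with a minimal pure-Python accumulator. This is a sketch, not a production implementation (libraries such as OpenCV provide optimized versions): each edge pixel votes for every (θ, ρ) pair consistent with it, and the cell with the most votes identifies a line.

```python
import math

def hough_peak(edge_pixels, thetas_deg=range(0, 180)):
    """Standard Hough transform: each edge pixel (x, y) votes for
    (theta, rho) cells satisfying rho = x*cos(theta) + y*sin(theta);
    the cell with the most votes is the strongest straight line."""
    acc = {}
    for x, y in edge_pixels:
        for t in thetas_deg:
            rad = math.radians(t)
            rho = round(x * math.cos(rad) + y * math.sin(rad))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get)

# Edge pixels along the horizontal line y = 5 (toy input).
theta, rho = hough_peak([(x, 5) for x in range(60)])
```

For a horizontal line, every pixel votes for the same cell at θ = 90 degrees with ρ equal to the line's y-coordinate, so the peak recovers the line's two polar parameters, the two degrees of freedom noted above.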
In some embodiments, the method further includes determining if a point is internal to the solar panel by determining if a ray that originates from the point intersects the longest visible horizontal line and the longest visible vertical line at an odd number of points.
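The odd-intersection criterion is the classic ray-casting test, sketched here for a polygonal panel boundary (the function name and the rectangle coordinates are illustrative):

```python
def is_internal(point, polygon):
    """Ray-casting test: a point is inside a polygon if a ray cast
    from it (here, toward +x) crosses the boundary an odd number of
    times."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1

# A 10 x 6 rectangular panel boundary (toy coordinates).
rect = [(0, 0), (10, 0), (10, 6), (0, 6)]
inside = is_internal((5, 3), rect)
outside = is_internal((12, 3), rect)
```

A point inside the rectangle yields one crossing (odd), while a point outside yields zero or two (even), matching the criterion in the text.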
In some embodiments, the method further includes, prior to estimating the plurality of areas/features of the solar panel: extracting the sun's relative position in the sky using, for example, solar physics, the sun's elevation and/or azimuth; estimating glare based on the estimation of the sun's position; and using a noise cancellation and image correction algorithm to false color the glare. Glare is characterized by the existence of high-density white pixels in the image, which constitute the noise in this particular application. Noise cancellation thus means replacing this cluster of white pixels with the color of the panel. Some embodiments use filtering techniques, such as Gaussian, wavelet, or Kalman filtering, or false coloring, for this purpose.
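The replace-white-with-panel-color step can be sketched as a simple false-coloring pass over a grayscale grid. This is an illustrative minimal version (the function name, the median-fill choice, and the 240 threshold are assumptions; real embodiments may use the filters named above):

```python
def cancel_glare(image, white_threshold=240):
    """Replace near-white (glare) pixels with the panel's dominant
    intensity -- a simple false-coloring noise cancellation.

    image: 2D list of grayscale intensities (0-255).
    """
    # Dominant (median) intensity of the non-glare panel pixels.
    panel = sorted(v for row in image for v in row if v < white_threshold)
    if not panel:           # whole image is glare; nothing to sample
        return image
    fill = panel[len(panel) // 2]
    return [[fill if v >= white_threshold else v for v in row]
            for row in image]

# Toy panel patch (intensity 40) with two glare pixels (250, 255).
cleaned = cancel_glare([[40, 40, 255], [40, 250, 40], [40, 40, 40]])
```

The glare pixels are recolored to the panel's own intensity, so downstream edge and line detection is no longer thrown off by the white cluster.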
In some embodiments, the method further includes: prior to estimating the plurality of areas/features of the solar panel: using image masking and segmentation or filtering or image correction techniques to remove sun's glare on the solar panel. Example methods for masking and/or segmentation are described above in reference to
Referring back to
The method also includes obtaining (5708) a second image of the solar panel when the solar panel is in a perspective view. An example of a perspective view is described above in reference to
The method also includes detecting placement (5710) of the solar panel based on the second image by determining if the solar panel is co-planar with and at a predetermined offset from a fixed solar panel. In some embodiments, the solar panel and the fixed solar panel are substantially rectangular in shape. In some embodiments, the solar panel and the fixed solar panel are substantially similar in shape. In some embodiments, the picked solar panel and the fixed solar panel are substantially similar in shape along only one side, and the system may use symmetry for the calculations and/or estimations described herein. For example, when glare is not present along the longest visible Hough line, symmetry allows scaling because the panel geometry is regular. The solar panel is rectangular, and hence distance and angle measurements are always known for the non-visible side. However, when glare partially or totally covers the longest "visible" Hough line, an energy line is drawn to complement the geometry line to estimate the length of the longest Hough line.
In some embodiments, detecting the placement of the solar panel includes obtaining a third image of the fixed solar panel when the fixed solar panel is in a perspective view. For example, when the near edge appears longer than the farther edge, the image is in perspective view. Some embodiments record images continuously. In some embodiments, detecting the placement of the solar panel also includes identifying two centers including (i) a center of the solar panel when the solar panel is floating based on the second image and (ii) a center of the fixed solar panel based on the third image. In some embodiments, detecting the placement of the solar panel also includes drawing a line between the two centers as a function of time; establishing Euler angles and radius vector at different points in time (e.g., at each point in time). In some embodiments, detecting the placement of the solar panel also includes generating one or more control signals, for operating the first robotic controller, to move the solar panel such that 5 degrees of freedom (in relation to the fixed solar panel) approach a numerical value of 0. In some embodiments, detecting the placement of the solar panel also includes confirming coplanarity of points and Hough lines of the solar panel and the fixed solar panel, using computer vision (e.g., using edge detection, interest point detection, contour mapping, bounding-box method). Examples of these steps are described above in reference to
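The center-to-center line and the convergence criterion above can be sketched as follows. This is a simplified illustration (the function names and the tolerance are assumptions; the source's Euler-angle bookkeeping is reduced here to a radius vector with azimuth and elevation angles, and alignment to a residual check):

```python
import math

def center_line(moving_center, fixed_center):
    """Radius vector from the fixed panel's center to the floating
    panel's center: its magnitude plus azimuth/elevation angles
    (degrees), tracked at each point in time."""
    dx, dy, dz = (m - f for m, f in zip(moving_center, fixed_center))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.asin(dz / r)) if r else 0.0
    return r, azimuth, elevation

def is_aligned(dof_residuals, tol=1e-3):
    """Placement converges when every degree-of-freedom residual
    (relative to the fixed panel) approaches zero."""
    return all(abs(d) < tol for d in dof_residuals)

# Toy snapshot: floating panel center at (3, 4, 0), fixed at origin.
r, az, el = center_line((3.0, 4.0, 0.0), (0.0, 0.0, 0.0))
done = is_aligned([0.0, 0.0, 0.0, 0.0, 0.0])
not_done = is_aligned([0.1, 0.0, 0.0, 0.0, 0.0])
```

The control loop would recompute this vector on each captured frame and command the robot until `is_aligned` holds for the degrees of freedom being driven to zero.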
The method also includes generating (5712) a second set of control signals, based on the detected placement, for operating a second robotic controller for aligning the solar panel with the fixed solar panel. In some embodiments, generating the second set of control signals includes generating control signals to slide the solar panel when the solar panel is floating such that distance between the two centers is approximately equal to a predetermined safe offset distance plus widths of each panel. The second robotic controller and the first robotic controller may be different controllers or same controllers. The controllers may control the upper robotic arm and/or the lower robotic arm, in various embodiments. In some embodiments, generating the second set of control signals includes generating control signals for securing the solar panel (e.g., by positioning and tightening the clamp).
In some embodiments, the method further includes detecting a supporting mechanical equipment. This may include obtaining a fourth image of the solar panel in a second perspective view or in a top view. This could be a different perspective view from the other perspective view described earlier in reference to step 5708. In some embodiments, detecting the supporting mechanical equipment includes converting the fourth image to grayscale to obtain a grayscale image; denoising and applying a bilateral filter to the grayscale image to obtain a processed image. In some embodiments, detecting the supporting mechanical equipment includes eroding and thresholding the processed image to obtain a candidate image; drawing convex hulls for the candidate image. In some embodiments, detecting the supporting mechanical equipment includes determining external contours above a predetermined length based on the convex hulls; dilating edges for the candidate image. In some embodiments, detecting the supporting mechanical equipment includes blurring and applying Canny edge detection for the candidate image; testing for convex hull in orthogonal direction. In some embodiments, detecting the supporting mechanical equipment includes, in accordance with a determination that multiple convex hulls are created and orthogonality is observed, detecting a supporting mechanical equipment. These steps may generally include image processing, image preparation, and/or image conditioning. Examples of these steps are described above in reference to
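The final convex hull and orthogonality check can be sketched in pure Python as follows. This is an illustrative stand-in (Andrew's monotone chain hull instead of a full image-processing pipeline; the contour coordinates and the 5 degree tolerance are assumptions): equipment such as a clamp is flagged when the hull of a detected contour exhibits near-perpendicular consecutive edges.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain convex hull, in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def has_orthogonal_edges(hull, tol_deg=5.0):
    """True if any two consecutive hull edges are near perpendicular."""
    n = len(hull)
    for i in range(n):
        a, b, c = hull[i], hull[(i + 1) % n], hull[(i + 2) % n]
        v1 = (b[0]-a[0], b[1]-a[1])
        v2 = (c[0]-b[0], c[1]-b[1])
        norm = math.hypot(*v1) * math.hypot(*v2)
        if not norm:
            continue
        dot = v1[0]*v2[0] + v1[1]*v2[1]
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        if abs(angle - 90.0) <= tol_deg:
            return True
    return False

# Toy contour: a square bracket-like part plus an interior point.
hull = convex_hull([(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)])
detected = has_orthogonal_edges(hull)
```

The interior point is discarded by the hull, and the square's right-angled edges satisfy the orthogonality observation used to declare a supporting mechanical equipment detection.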
In some embodiments, the supporting mechanical equipment includes a clamp (sometimes referred to as a clamp assembly, e.g., the clamp assembly 602,
In some embodiments, determining if the solar panel is co-planar with and at a predetermined offset from the fixed solar panel includes determining if (i) the solar panel is on a same plane as a fixed solar panel and (ii) the solar panel and the fixed panel are near flush in a lateral direction.
Embodiments of the present invention have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
It will be apparent to those skilled in the art that various modifications and variations can be made in the system for installing a solar panel of the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
This application is based on and claims priority under 35 U.S.C. § 119 to U.S. Provisional Application No. 63/397,125, filed Aug. 11, 2022, the entire contents of which is incorporated herein by reference.
Number | Date | Country
---|---|---
63397125 | Aug 2022 | US