Embodiments of the present invention relate to providing an overhead view of detected physical objects located around an industrial machine, such as an electric rope or power shovel.
Industrial machines, such as electric rope or power shovels, draglines, etc., are used to execute digging operations to remove material from, for example, a bank of a mine. An operator controls a rope shovel during a dig operation to load a dipper with material. The operator deposits the material from the dipper into a haul truck. After depositing the material, the dig cycle continues and the operator swings the dipper back to the bank to perform additional digging.
As the dipper moves, it is important to have a clear swing path to avoid impact with other objects. For example, the dipper can impact the haul truck or other equipment in the swing path. The dipper can also impact the bank, the ground, other portions of the shovel, and/or other objects located around the shovel. The impact, especially if strong, can cause damage to the dipper and the impacted object. In addition, the impact can cause damage to other components of the shovel.
Accordingly, embodiments of the invention provide systems and methods for detecting and mitigating shovel collisions. To detect collisions, the systems and methods detect objects within an area around a shovel. After detecting objects, the systems and methods can optionally augment control of the shovel to mitigate the impact of possible collisions with the detected objects. When mitigating a collision, the systems and methods can provide alerts to the shovel operator using audible, visual, and/or haptic feedback.
In particular, one embodiment of the invention provides a system for providing an overhead view of an area around a shovel. The system includes at least one processor. The at least one processor is configured to receive data from at least one sensor installed on the shovel, wherein the data relates to the area around the shovel, identify a plurality of planes based on the data, and determine if the plurality of planes are positioned in a predetermined configuration associated with a haul truck. If the plurality of planes are positioned in the predetermined configuration, the at least one processor is configured to superimpose the plurality of planes on an overhead-view image of the shovel and the area.
Another embodiment of the invention provides a method of providing an overhead view of an area around an industrial machine. The method includes receiving, at at least one processor, data from at least one sensor installed on the industrial machine, wherein the data relates to the area around the industrial machine. The method also includes identifying, by the at least one processor, a plurality of planes based on the data, determining, by the at least one processor, if the plurality of planes are positioned in a predetermined configuration associated with a predetermined physical object, and, if the plurality of planes are positioned in the predetermined configuration, superimposing the plurality of planes on an overhead-view image of the industrial machine and the area.
Other aspects of the invention will become apparent by consideration of the detailed description and accompanying drawings.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. The terms “mounted,” “connected” and “coupled” are used broadly and encompass both direct and indirect mounting, connecting and coupling. Further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings, and can include electrical connections or couplings, whether direct or indirect. Also, electronic communications and notifications may be performed using any known means including direct connections, wireless connections, etc.
It should also be noted that a plurality of hardware and software based devices, as well as a plurality of different structural components, may be used to implement the invention. In addition, it should be understood that embodiments of the invention may include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware. However, one of ordinary skill in the art, based on a reading of this detailed description, would recognize that, in at least one embodiment, the electronic based aspects of the invention may be implemented in software (e.g., stored on non-transitory computer-readable medium) executable by one or more processors. Furthermore, and as described in subsequent paragraphs, the specific mechanical configurations illustrated in the drawings are intended to exemplify embodiments of the invention, and other alternative mechanical configurations are possible. For example, “controllers” described in the specification can include standard processing components, such as one or more processors, one or more computer-readable medium modules, one or more input/output interfaces, and various connections (e.g., a system bus) connecting the components.
The shovel 100 also includes taut suspension cables 150 coupled between the base 110 and boom 130 for supporting the boom 130; a hoist cable 155 attached to a winch (not shown) within the base 110 for winding the cable 155 to raise and lower the dipper 140; and a dipper door cable 160 attached to another winch (not shown) for opening the door 145 of the dipper 140. In some instances, the shovel 100 is a P&H® 4100 series shovel produced by P&H Mining Equipment Inc., although the shovel 100 can be another type or model of electric mining equipment.
When the tracks 105 of the mining shovel 100 are static, the dipper 140 is operable to move based on three control actions: hoist, crowd, and swing. Hoist control raises and lowers the dipper 140 by winding and unwinding the hoist cable 155. Crowd control extends and retracts the position of the handle 135 and dipper 140. In one embodiment, the handle 135 and dipper 140 are crowded by using a rack and pinion system. In another embodiment, the handle 135 and dipper 140 are crowded using a hydraulic drive system. The swing control swivels the handle 135 relative to the swing axis 125. During operation, an operator controls the dipper 140 to dig earthen material from a dig location, swing the dipper 140 to a dump location, release the door 145 to dump the earthen material, tuck the dipper 140, which causes the door 145 to close, and swing the dipper 140 to the same or another dig location.
As described above in the summary section, as an operator swings the dipper 140, the dipper 140 can collide with other objects, such as a haul truck 175 (e.g., the bed 176 of the haul truck 175) and other components of the shovel 100 (e.g., the tracks 105, a counterweight located at the rear of the shovel 100, etc.). These collisions (e.g., metal-on-metal impacts) can cause damage to the dipper 140, the shovel 100, and the impacted object. Therefore, the shovel 100 includes a controller that detects objects and augments control of the dipper 140 to mitigate a collision between the dipper 140 and a detected object.
The controller includes combinations of hardware and software that are operable to, among other things, monitor operation of the shovel 100 and augment control of the shovel 100, if applicable. A controller 300 according to one embodiment of the invention is illustrated in
As described below in more detail, the detection module 400 detects objects and provides information about detected objects to the mitigation module 500. The mitigation module 500 uses the information from the detection module 400 and other information regarding the shovel 100 (e.g., current position, motion, etc.) to identify or detect possible collisions and, optionally, mitigate the collisions. It should be understood that the functionality of the controller 300 can be distributed between the detection module 400 and the mitigation module 500 in various configurations. For example, in some embodiments, alternatively or in addition to the functionality of the mitigation module 500, the detection module 400 detects possible collisions based on detected objects (and other information regarding the shovel 100 received directly or indirectly through the mitigation module 500) and provides warnings to an operator. The detection module 400 can also provide information regarding identified possible collisions to the mitigation module 500, and the mitigation module 500 can use the information to automatically mitigate the collisions.
Separating the controller 300 into the detection module 400 and the mitigation module 500 allows the functionality of each module to be used independently and in various configurations. For example, the detection module 400 can be used without the mitigation module 500 to detect objects, detect collisions, and/or provide warnings to an operator. In addition, the mitigation module 500 can be configured to receive data from multiple detection modules 400 (e.g., each detection module 400 detects particular objects or a particular area around the shovel 100). Furthermore, by separating the controller 300 between the two modules, each module can be tested individually to ensure that the module is operating properly.
The computer-readable media 404 and 504 store program instructions and data. The processors 402 and 502 included in each module 400 and 500 are configured to retrieve instructions from the media 404 and 504 and execute, among other things, the instructions to perform the control processes and methods described herein. The input/output interfaces 406 and 506 of each module 400 and 500 transmit data from the module to external systems, networks, and/or devices and receive data from external systems, networks, and/or devices. The input/output interfaces 406 and 506 can also store data received from external sources to the media 404 and 504 and/or provide the data to the processors 402 and 502, respectively.
As illustrated in
The mitigation module 500 is also in communication with a number of shovel position sensors 380 to monitor the location and status of the dipper 140 and/or other components of the shovel 100. For example, in some embodiments, the mitigation module 500 is coupled to one or more crowd sensors, swing sensors, hoist sensors, and shovel sensors. The crowd sensors indicate a level of extension or retraction of the handle 135 and the dipper 140. The swing sensors indicate a swing angle of the handle 135. The hoist sensors indicate a height of the dipper 140 based on a position of the hoist cable 155. The shovel sensors indicate whether the dipper door 145 is open (for dumping) or closed. The shovel sensors may also include weight sensors, acceleration sensors, and inclination sensors to provide additional information to the mitigation module 500 about the load within the dipper 140. In some embodiments, one or more of the crowd sensors, swing sensors, and hoist sensors are resolvers that indicate an absolute position or relative movement of the motors used to move the dipper 140 (e.g., a crowd motor, a swing motor, and/or a hoist motor). For instance, for indicating relative movement, as the hoist motor rotates to wind the hoist cable 155 to raise the dipper 140, the hoist sensors output a digital signal indicating an amount of rotation of the hoist and a direction of movement. The mitigation module 500 translates these outputs to a height position, speed, and/or acceleration of the dipper 140.
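By way of illustration only, the following Python sketch shows one way such incremental resolver outputs could be translated into height, speed, and acceleration estimates for the dipper 140; the scaling constant and sample period are hypothetical placeholders, not values taken from this description.

```python
# Minimal sketch (illustration only): converting incremental hoist-resolver
# output into height, speed, and acceleration estimates for the dipper.
# HOIST_METERS_PER_COUNT and SAMPLE_PERIOD_S are assumed placeholder values.

HOIST_METERS_PER_COUNT = 0.001   # assumed cable travel per resolver count (meters)
SAMPLE_PERIOD_S = 0.01           # assumed fixed sampling interval (seconds)


class HoistTracker:
    def __init__(self, initial_height_m=0.0):
        self.height_m = initial_height_m
        self.speed_mps = 0.0

    def update(self, delta_counts, direction):
        """delta_counts: resolver counts since the last sample; direction: +1 raise, -1 lower."""
        delta_height = direction * delta_counts * HOIST_METERS_PER_COUNT
        new_speed = delta_height / SAMPLE_PERIOD_S
        accel = (new_speed - self.speed_mps) / SAMPLE_PERIOD_S
        self.height_m += delta_height
        self.speed_mps = new_speed
        return self.height_m, self.speed_mps, accel
```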
As illustrated in
The detection module 400 is also in communication with a number of object detection sensors 390 for detecting objects. The sensors 390 can include digital cameras and/or laser scanners (e.g., 2-D or 3-D scanners). For example, in some embodiments, the sensors 390 include one or more SICK LD-MRS laser scanners. In other embodiments, alternatively or in addition, the sensors 390 include one or more TYSX G3 EVS AW stereo cameras. In embodiments where the sensors 390 include both laser scanners and cameras, the detection module 400 can use just the laser scanners if the cameras are unavailable or are not functioning properly and vice versa. In some embodiments, the sensors 390 include at least three laser scanners. One scanner can be positioned on the left side (as viewed by a shovel operator) of the shovel 100 (to track dumping of material to the left of the shovel 100). A second scanner can be positioned on the right side (as viewed by a shovel operator) of the shovel 100 (to track dumping of material to the right of the shovel 100). A third scanner can be positioned on the rear of the shovel 100 to detect objects generally located behind the shovel 100 (e.g., that may collide with the counterweight at the rear of the shovel 100).
As noted above, the detection module 400 and the mitigation module 500 are configured to retrieve instructions from the media 404 and 504, respectively, and execute, among other things, instructions to perform control processes and methods for the shovel 100. For example,
Alternatively or in addition, the detection module 400 executes a global detection method that maps the location of detected objects in the shovel surroundings. The global detection method can focus on a larger, predetermined region-of-interest than the region-of-interest associated with the local detection method. The global detection method can also attempt to recognize specific objects. For example, the global detection method can determine whether a detected object is part of a haul truck, part of the ground, part of a wall, etc.
In some embodiments, the detection module 400 is configured to detect particular objects, such as haul trucks 175. To detect the trucks 175, the detection module 400 identifies planes based on the data from the sensors 390 (at 602). In particular, the detection module 400 can be configured to identify one or more horizontal and/or vertical planes in a configuration commonly associated with a haul truck 175. For example, as illustrated in
For example, as illustrated in
The lines 702 and 704 define various planes that make up the truck 175. In particular, as illustrated in
In addition, the header line 704, the front bounding line 702a, the far bounding line 702c, and the near bounding line 702d define a top header plane 716. The header line 704, the far bounding line 702c, and the near bounding line 702d also define a side header plane 718. Also, the header line 704, the far bounding line 702c, the near bounding line 702d, and the rear bounding line 702b define a bed plane 720.
The detection module 400 is configured to identify a set of one or more of the planes illustrated in
The detection module 400 uses the positions (and sizes) of identified planes to determine whether a detected object corresponds to a haul truck 175 (at 604). For example, in some embodiments, the detection module 400 is configured to detect planes from a point cloud in three-dimensional space (i.e., x-y-z). In particular, to identify planes, the module 400 initially removes all points below a predetermined height (i.e., below a predetermined z value). The module 400 then projects the remaining points onto a two-dimensional plane, which results in a binary two-dimensional image. The module 400 then performs blob detection on the binary two-dimensional image. Blob detection uses mathematical methods to detect regions within a digital image that differ in properties (e.g., brightness, color, etc.) from surrounding areas. Therefore, a detected region or “blob” is a region of a digital image in which some properties of the region are constant or vary within a predetermined range of values (i.e., all points in the blob are similar).
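A minimal Python sketch of this pre-processing, assuming a simple grid projection and connected-component labeling as the blob detector, is shown below; the height threshold and cell size are illustrative assumptions rather than values from the text.

```python
# Illustrative pre-processing sketch: drop points below a height threshold,
# project the remaining points onto a horizontal grid to form a binary image,
# and group occupied cells into blobs via connected-component labeling.
import numpy as np
from scipy import ndimage


def detect_blobs(points, min_z=1.5, cell_size=0.25):
    """points: (N, 3) array of x, y, z coordinates from the object detection sensors."""
    kept = points[points[:, 2] >= min_z]              # remove points below the height threshold
    if kept.shape[0] == 0:
        return np.zeros((1, 1), dtype=int), 0
    xy = kept[:, :2]
    origin = xy.min(axis=0)
    cells = np.floor((xy - origin) / cell_size).astype(int)
    image = np.zeros(cells.max(axis=0) + 1, dtype=bool)
    image[cells[:, 0], cells[:, 1]] = True            # binary two-dimensional image
    labels, num_blobs = ndimage.label(image)          # each connected region is a "blob"
    return labels, num_blobs
```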
After detecting all the blobs in the image, the detection module 400 eliminates any blobs that do not conform to a predetermined size (e.g., predetermined width/length ratio thresholds). The detection module 400 then performs line detection on each remaining blob to determine if the blob includes the four bounding lines 702 and the header line 704 commonly associated with a haul truck 175. If it does, the module 400 checks that the four bounding lines 702 form a rectangle (e.g., the front bounding line 702a and the rear bounding line 702b are parallel and perpendicular to the far bounding line 702c and the near bounding line 702d) and that the header line 704 is parallel to the front bounding line 702a and the rear bounding line 702b. Using the location of the four bounding lines 702 in the point cloud, the detection module 400 then determines the height of the lines 702 (i.e., the z value). If the height of the lines indicates that the lines properly define an approximately horizontal rectangle that fits the predetermined length/width ratio thresholds (i.e., no line is in an unexpected z plane), the module 400 projects each of the lines 702 and 704 in the height direction (i.e., z direction) to the ground to form a plane in three-dimensional space. In particular, the planes include the front plane 712, the far sidewall plane 706, the near sidewall plane 710, the rear plane 714, and the side header plane 718. The module 400 also projects a plane from the header line 704 to the front plane 712, which defines the top header plane 716. In addition, the module 400 projects a plane from the top height of the rear plane 714 to half of the height under the header line 704, which forms the bed plane 720.
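Continuing the sketch under the same illustrative assumptions (the tolerance and helper names are hypothetical), the rectangle-plus-header check and the projection of a detected line down to the ground to form a vertical plane could look like the following.

```python
# Illustrative continuation: verify that the detected bounding lines and header
# line form the rectangle-plus-header pattern described above, and project a
# horizontal line down to the ground to form a vertical plane.
import numpy as np


def _unit_direction(line):
    p0, p1 = np.asarray(line[0], dtype=float), np.asarray(line[1], dtype=float)
    d = p1 - p0
    return d / np.linalg.norm(d)


def is_truck_line_pattern(front, rear, far, near, header, tol=0.05):
    """Each argument is a 3-D segment given as ((x0, y0, z0), (x1, y1, z1))."""
    f, r = _unit_direction(front), _unit_direction(rear)
    s1, s2 = _unit_direction(far), _unit_direction(near)
    h = _unit_direction(header)
    parallel = lambda a, b: abs(abs(np.dot(a, b)) - 1.0) < tol
    perpendicular = lambda a, b: abs(np.dot(a, b)) < tol
    return (parallel(f, r) and parallel(s1, s2)
            and perpendicular(f, s1) and parallel(h, f))


def project_line_to_ground(line, ground_z=0.0):
    """Return the four corners of the vertical plane swept from the line down to the ground."""
    (x0, y0, z0), (x1, y1, z1) = line
    return [(x0, y0, z0), (x1, y1, z1), (x1, y1, ground_z), (x0, y0, ground_z)]
```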
After identifying the planes of the haul truck 175, the detection module 400 can define the position, size, and orientation of the haul truck 175 based on the planes. In some embodiments, the detection module 400 uses a grid to track the position, size, and orientation of identified objects (e.g., identified planes). The detection module 400 can provide the grid to the mitigation module 500, and the mitigation module 500 can use the grid to determine possible collisions between the dipper 140 and detected haul trucks 175 and, optionally, mitigate the collisions accordingly.
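One possible representation of such a grid, offered only as an assumption and not mandated by this description, is a two-dimensional occupancy grid in which cells covered by an identified plane's footprint are tagged with an object identifier, as in the following sketch.

```python
# Assumed grid representation: a 2-D occupancy grid in which the footprint of
# each identified plane is marked with an object identifier so the mitigation
# module can look up occupied cells.
import numpy as np


class ObjectGrid:
    def __init__(self, width_m=60.0, depth_m=60.0, cell_size=0.5):
        self.cell_size = cell_size
        rows = int(depth_m / cell_size)
        cols = int(width_m / cell_size)
        self.grid = np.zeros((rows, cols), dtype=np.int16)

    def mark_plane(self, corners_xy, object_id):
        """Mark the axis-aligned bounding box of a plane footprint with object_id."""
        cols = [int(x / self.cell_size) for x, _ in corners_xy]
        rows = [int(y / self.cell_size) for _, y in corners_xy]
        r0, r1 = max(min(rows), 0), min(max(rows) + 1, self.grid.shape[0])
        c0, c1 = max(min(cols), 0), min(max(cols) + 1, self.grid.shape[1])
        self.grid[r0:r1, c0:c1] = object_id
```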
In some embodiments, the detection module 400 also defines volumes of exclusion based on the planes of identified haul trucks 175 (at 606). For example, depending on a particular plane identified by the detection module 400 as representing a haul truck 175, the detection module 400 defines a volume including the plane that marks an area around the haul truck 175 that the shovel 100 (e.g., the dipper 140) should not enter. For example,
Similarly, the detection module 400 can define a volume of exclusion for the far sidewall plane 706 and the near sidewall plane 710. For example, as illustrated in
In some embodiments, after the detection module 400 detects one or more planes, the detection module 400 can lock the planes. In this situation, the detection module 400 no longer attempts to detect or identify objects. However, the locked planes can be used to test the mitigation module 500 even with the detected object removed. For example, after a haul truck 175 is detected at a particular position, the haul truck 175 can be physically removed while the mitigation module 500 is tested to determine if the module 500 successfully augments control of the dipper 140 to avoid a collision with the truck 175 based on the locked position of the truck 175 previously detected by the detection module 400. In this regard, the functionality of the mitigation module 500 can be tested without risking damage to the shovel 100 or the haul truck 175 if the mitigation module 500 malfunctions.
Returning to
The planes and/or volumes of exclusion can be displayed in various ways. For example, in some embodiments, the user interface 370 superimposes the detected planes on a camera view of an area adjacent to the shovel 100. In particular, one or more still or video cameras including a wide-angle lens, such as a fisheye lens, can be mounted on the shovel 100 and can be used to capture an image of one or more areas around the shovel 100. For example,
The overhead view can also include a graphical representation 820 of the shovel 100. In some embodiments, the representation 820 can be modified based on the current status of the shovel 100 (e.g., the current swing angle of the dipper 140). The planes and/or the volumes of exclusion determined by the detection module 400 can be superimposed on the overhead view of the shovel 100. For example, as illustrated in
The mitigation module 500 also uses the current position and direction of travel or movement of the shovel 100 to identify possible collisions between a portion of the shovel 100, such as the dipper 140, and a detected object (at 906). In some embodiments, the mitigation module 500 identifies a possible collision based on whether the dipper 140 is headed toward and is currently positioned within a predetermined distance from a detected object or a volume of exclusion associated with the detected object. For example, the mitigation module 500 identifies a velocity vector of the dipper 140. In some embodiments, the velocity vector is associated with a ball pin of the dipper 140. In other embodiments, the module 500 identifies multiple velocity vectors, such as a vector for each of a plurality of outer points of the dipper 140. The mitigation module 500 can generate the one or more velocity vectors based on forward kinematics of the shovel 100. After generating the one or more velocity vectors, the module 500 performs geometric calculations to extend the velocity vectors infinitely and determine if any vector intersects any of the planes identified by the detection module 400 (see
If there is an intersection, the module 500 identifies that a collision is possible. When the mitigation module 500 determines that a collision is possible, the mitigation module 500 can generate one or more alerts (e.g., audio, visual, or haptic) and issue the alerts to the shovel operator. The mitigation module 500 can also optionally augment control of the shovel 100 to prevent a collision or reduce the impact speed of a collision with the detected object (at 908). In particular, the mitigation module 500 can apply a force field that slows the dipper 140 when it is too close to a detected object. The mitigation module 500 can also apply a velocity limit field that limits the speed of the dipper 140 when it is close to a detected object.
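The geometric test described above can be illustrated with the following sketch, which extends a velocity vector from a point on the dipper and computes where, if anywhere, it crosses a detected plane; representing the plane by a point on it and its unit normal, and the tolerance value, are illustrative assumptions.

```python
# Hedged sketch of a ray-plane intersection test for one velocity vector.
import numpy as np


def ray_plane_intersection(origin, velocity, plane_point, plane_normal, eps=1e-9):
    """Return the intersection point, or None if the extended vector never reaches the plane."""
    origin = np.asarray(origin, dtype=float)
    velocity = np.asarray(velocity, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    denom = np.dot(velocity, plane_normal)
    if abs(denom) < eps:                  # moving parallel to the plane
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    if t < 0:                             # the plane lies behind the direction of travel
        return None
    return origin + t * velocity
```

An intersection point returned by such a helper could then be compared against the radii of the repulsive field discussed below.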
For example, the module 500 can generate a repulsive field at the point of the identified intersection. The repulsive field modifies the motion command generated through the user interface 370 based on operator input. In particular, the mitigation module 500 applies a repulsive force to a motion command to reduce the command. For example, the mitigation module 500 receives a motion command, uses the repulsive field to determine how much to reduce the command, and outputs a new, modified motion command. One or more controllers included in the shovel 100 receive the modified motion command, or a portion thereof, and operate one or more components of the shovel 100 based on the command. For example, a controller that swings the handle 135 swings the handle 135 as instructed in the motion command.
It should be understood that because the velocity vectors are extended infinitely, an intersection may be identified even when the dipper 140 is a large distance from the detected object. The repulsive field applied by the mitigation module 500, however, may be associated with a maximum radius and a minimum radius. If the detected intersection is outside of the maximum radius, the mitigation module 500 does not augment control of the shovel 100 and, thus, no collision mitigation occurs.
The repulsive field applies an increasing negative factor to the motion command as the dipper 140 moves closer to a center of the repulsive field. For example, when the dipper 140 first moves within the maximum radius of the repulsive field, the field reduces the motion command by a small amount, such as approximately 1%. As the dipper 140 moves closer to the center of the repulsive field, the field reduces the motion command by a greater amount until the dipper 140 is within the minimum radius of the field, where the reduction is approximately 100% and the dipper 140 is stopped. In some embodiments, the repulsive field is only applied to motion of the dipper 140 toward the detected object. Therefore, an operator can still manually move the dipper 140 away from the detected object. In some situations, the dipper 140 may be repulsed by multiple repulsive fields (e.g., associated with multiple detected objects or multiple planes of a detected object). The multiple repulsive fields prevent the dipper 140 from moving in multiple directions. However, in most situations, the dipper 140 can still be manually moved in at least one direction away from the detected object.
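A sketch consistent with this behavior, assuming a simple linear ramp between the maximum and minimum radii (other scaling profiles are possible), is shown below.

```python
# Sketch of repulsive-field scaling: roughly a 1% reduction just inside the
# maximum radius, growing to a full stop at the minimum radius, and applied
# only to motion toward the field's center. The linear ramp is an assumption.
def repulsed_command(command, distance_to_center, moving_toward, r_min, r_max):
    if not moving_toward or distance_to_center >= r_max:
        return command                     # outside the field (or moving away): no mitigation
    if distance_to_center <= r_min:
        return 0.0                         # approximately 100% reduction: dipper stopped
    reduction = 0.01 + 0.99 * (r_max - distance_to_center) / (r_max - r_min)
    return command * (1.0 - reduction)
```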
Therefore, the mitigation module 500 can prevent collisions between the shovel 100 and other objects or can mitigate the force of such collisions and the resulting impacts. When preventing or mitigating a collision (e.g., by limiting movement of the shovel 100 or limiting the speed of that movement), the mitigation module 500 can provide alerts to the operator using audible, visual, or haptic feedback (at 910). The alerts inform the operator that the augmented control is part of collision mitigation control rather than a malfunction of the shovel 100 (e.g., non-responsiveness of the dipper 140).
In some embodiments, unlike other collision detection systems, the systems and methods described in the present application do not require modifications to the detected objects, such as the haul truck 175. In particular, in some arrangements, no sensors or devices and related communications links are required to be installed on and used with the haul truck 175 to provide information to the shovel 100 about the location of the haul truck 175. For example, in some existing systems, visual fiducials and other passive/active position sensing equipment (e.g., GPS devices) are mounted on haul trucks, and a shovel uses information from this equipment to track the location of a haul truck. Eliminating the need for such modifications reduces the complexity of the systems and methods and reduces the cost of haul trucks 175.
Similarly, some existing collision detection systems require that the system be preprogrammed with the characteristics (e.g., image, size, dimensions, colors, etc.) of all available haul trucks (e.g., all makes, models, etc.). The detection systems use these preprogrammed characteristics to identify haul trucks. This type of preprogramming, however, increases the complexity of the system and requires extensive and frequent updates to detect all available haul trucks when new trucks become available or existing haul trucks are modified. In contrast, as described above, the detection module 400 uses planes to identify a haul truck 175. Using planes and a configuration of planes commonly associated with a haul truck increases the accuracy of the detection module 400 and eliminates the need for extensive preprogramming and associated updates. In addition, by detecting objects based on more than just one characteristic, such as size, the detection module 400 more accurately detects haul trucks. For example, using the plane configuration described above, the detection module 400 can distinguish between haul trucks and other pieces of equipment or other parts of an environment similar in size to a haul truck (e.g., large boulders).
It should be understood that although the above functionality is related to detecting and mitigating collisions between the shovel 100 (i.e., the dipper 140) and a haul truck 175, the same functionality can be used to detect and/or mitigate collisions between any component of the shovel 100 and any type of object. For example, the functionality can be used to detect and/or mitigate collisions between the tracks 105 and the dipper 140, between the tracks 105 and objects located around the shovel 100 such as boulders or people, between the counterweight at the rear of the shovel 100 and objects located behind the shovel 100, etc. Also, it should be understood that the functionality of the controller 300 as described in the present application can be combined with other controllers to perform additional functionality. In addition or alternatively, the functionality of the controller 300 can also be distributed among more than one controller. Also, in some embodiments, the controller 300 can be operated in various modes. For example, in one mode, the controller 300 may detect potential collisions but may not augment control of the dipper 140 (i.e., only operate the detection module 400). In this mode, the controller 300 may log information about detected objects and/or detected possible collisions with detected objects and/or may alert the operator of the objects and/or the possible collisions.
It should also be understood that although the functionality of the controller 300 is described above in terms of two modules (i.e., the detection module 400 and the mitigation module 500), the functionality can be distributed between the two modules in various configurations. Furthermore, in some embodiments, as illustrated in
Various features and advantages of the invention are set forth in the following claims.
The present application claims priority to U.S. Provisional Application No. 61/617,516, filed Mar. 29, 2012 and U.S. Provisional Application No. 61/763,229, filed Feb. 11, 2013, the entire contents of which are both incorporated by reference herein.