The present disclosure relates generally to a detection method and, more particularly, to a method for detecting obstacles near a machine.
Large machines such as, for example, wheel loaders, off-highway haul trucks, excavators, motor graders, and other types of earth-moving machines are used to perform a variety of tasks. Some of these tasks involve intermittently moving between and stopping at certain locations within a worksite and, because of the poor visibility provided to operators of the machines, these tasks can be difficult to complete safely and effectively. Therefore, operators of the machines may additionally be provided with detections from obstacle sensors. But, individual obstacle sensors operate effectively (i.e. provide accurate detections) only within certain spatial regions. Outside of these regions, the obstacle sensors may provide inaccurate detections. For example, one obstacle sensor may detect an obstacle at a certain location, and another obstacle sensor may detect nothing at that same location, solely because of how each is mounted to the machine and aimed.
One way to minimize the effect of these contradictory detections is described in U.S. Pat. No. 6,055,042 (the '042 patent) issued to Sarangapani on Apr. 25, 2000. The '042 patent describes a method for detecting an obstacle in the path of a mobile machine. The method includes scanning with each of a plurality of obstacle sensor systems. The method also includes weighting the data scanned by each of the obstacle sensor systems based upon external parameters such as ambient light, size of the obstacle, or amount of reflected power received from the obstacle. Based on this weighted data, at least one characteristic of the obstacle is determined.
Although the method of the '042 patent may improve detection of an obstacle in the path of a mobile machine, it may be prohibitively expensive for certain applications. In particular, weighting the data scanned by the obstacle sensor systems may be unnecessary. Because this weighting may require information regarding external parameters, additional hardware may be required. And, this additional hardware may increase the costs of implementing the method.
The disclosed method and system are directed to overcoming one or more of the problems set forth above.
In one aspect, the present disclosure is directed to a method for detecting obstacles near a machine. The method includes pairing one-to-one each of a plurality of obstacle sensors to each of a plurality of non-overlapping confidence regions. Additionally, the method includes scanning with the plurality of obstacle sensors. The method also includes receiving from the plurality of obstacle sensors raw data regarding the scanning. In addition, the method includes assembling the raw data into a map. The method also includes determining at least one characteristic of at least one obstacle, based on the map.
In another aspect, the present disclosure is directed to a system for detecting obstacles near a machine. The system includes a plurality of obstacle sensors located on the machine. The system also includes a controller in communication with each of the plurality of obstacle sensors. The controller is configured to pair one-to-one each of the plurality of obstacle sensors to each of a plurality of non-overlapping confidence regions. Additionally, the controller is configured to scan with the plurality of obstacle sensors. The controller is also configured to receive from the plurality of obstacle sensors raw data regarding the scanning, and assemble the raw data into a map. Based on the map, the controller is configured to determine at least one characteristic of at least one obstacle.
Machine 10 may have an operator station 24, which may be situated to minimize the effect of blind spots (i.e. maximize the unobstructed area viewable by an operator of machine 10). But, because of the size of some machines, these blind spots may still be large. For example, dangerous obstacle 12 may reside completely within a blind spot 28 of machine 10. To avoid collisions with obstacle 12, machine 10 may be equipped with an obstacle detection system 30 (referring to
Obstacle detection system 30 may include an obstacle sensor 32, or a plurality thereof, to detect points E on surfaces within blind spot 28. For example, obstacle detection system 30 may include a first obstacle sensor 32a and a second obstacle sensor 32b. Obstacle sensor 32a may detect points E1 that are on surfaces facing it (i.e. points E within a line of sight of obstacle sensor 32a). And, obstacle sensor 32b may detect points E2 that are on surfaces facing it (i.e. points E within a line of sight of obstacle sensor 32b). Detections of points E1 and E2 may be raw (i.e. not directly comparable). Therefore, as illustrated in
Controller 34 may be associated with operator station 24 (referring to
Map 36, electronic in form, may be stored in the memory of controller 34, and may be updated in real time to reflect the locations of transformed points E1 and E2. As illustrated in
As previously discussed, detections of points E1 and E2 by obstacle sensors 32a and 32b, respectively, may be raw. In particular, these detections may be raw because obstacle sensors 32a and 32b may not be fixedly located at a shared location with respect to coordinate system T. For example, it is contemplated that obstacle sensors 32a and 32b may both be attached to a quarter panel 39 of machine 10, but obstacle sensor 32a may be located at a point OSa and obstacle sensor 32b may be located at a point OSb. Therefore, locations of points E1 may be detected with respect to a coordinate system Sa, with an origin at point OSa, and locations of points E2 may be detected with respect to a coordinate system Sb, with an origin at point OSb.
Coordinate system Sa may be a right-handed 3-D Cartesian coordinate system having axis vectors xSa, ySa, and zSa. A point in coordinate system Sa may be referenced by its spatial coordinates in the Cartesian form XSa=[sa1 sa2 sa3], where from point OSa, sa1 is the distance along axis vector xSa, sa2 is the distance along axis vector ySa, and sa3 is the distance along axis vector zSa. The geographical location of point OSa and the orientation of coordinate system Sa relative to coordinate system T may be fixed and known. In particular, XT(OSa) may equal [−bSa1 −bSa2 −bSa3], and AT(Sa) may equal [psa ysa rsa]. A point in coordinate system Sa may alternatively be referenced by its spatial coordinates in the polar form XSaP=[ρa θa φa], where ρa is the distance from point OSa, θa is the polar angle from axis vector xSa, and φa is the zenith angle from axis vector zSa.
Coordinate system Sb may be a right-handed 3-D Cartesian coordinate system having axis vectors xSb, ySb, and zSb. A point in coordinate system Sb may be referenced by its spatial coordinates in the Cartesian form XSb=[sb1 sb2 sb3], where from point OSb, sb1 is the distance along axis vector xSb, sb2 is the distance along axis vector ySb, and sb3 is the distance along axis vector zSb. The geographical location of point OSb and the orientation of coordinate system Sb relative to coordinate system T may also be fixed and known. In particular, XT(OSb) may equal [−bSb1 −bSb2 −bSb3], and AT(Sb) may equal [psb ysb rsb]. A point in coordinate system Sb may alternatively be referenced by its spatial coordinates in the polar form XSbP=[ρb θb φb], where ρb is the distance from point OSb, θb is the polar angle from axis vector xSb, and φb is the zenith angle from axis vector zSb.
Each obstacle sensor 32 may embody a LIDAR (light detection and ranging) device, a RADAR (radio detection and ranging) device, a SONAR (sound navigation and ranging) device, a vision-based sensing device, or another type of device that may detect a range and a direction to points E. For example, as detected by obstacle sensor 32a, the range to point E1 may be represented by spatial coordinate ρa and the direction to point E1 may be represented by the combination of spatial coordinates θa and φa. And, as detected by obstacle sensor 32b, the range to point E2 may be represented by spatial coordinate ρb and the direction to point E2 may be represented by the combination of spatial coordinates θb and φb.
As illustrated in
Some of the detections within over-detected region 42 may be inaccurate due to reflections or other unknown interferences. For example, detections of points E1 within over-detected region 42a (shown by double crosshatching in
The disclosed system may be applicable to machines that intermittently move between and stop at certain locations within a worksite. The system may determine a characteristic of an obstacle near one of the machines. In particular, the system may detect and analyze surface points to determine the size and location of the obstacle. Operation of the system will now be described.
As illustrated in
The pairing of step 100 may be based on the location and orientation of obstacle sensors 32a and 32b. Since the pairing is one-to-one, controller 34 may use it to resolve conflicting obstacle detections from sensors 32a and 32b. For example, obstacle sensor 32a may be paired with confidence region 44a, which may include the volume bounded by detection region 40a (referring to
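By way of illustration only, the one-to-one pairing of step 100 might be held as a lookup from each sensor to a membership test for its confidence region. The Python sketch below is hypothetical: the axis-aligned boxes and sensor labels are placeholders, not regions defined by this disclosure.

```python
# Hypothetical sketch of the one-to-one pairing of step 100.
# Each confidence region is modeled as an axis-aligned box in
# coordinate system T; the two boxes are disjoint, mirroring the
# requirement that confidence regions 44a and 44b not overlap.

def in_region_44a(t1, t2, t3):
    # Illustrative bounds only (meters).
    return 0.0 <= t1 <= 10.0 and 0.0 <= t2 <= 5.0 and 0.0 <= t3 <= 4.0

def in_region_44b(t1, t2, t3):
    return 0.0 <= t1 <= 10.0 and -5.0 <= t2 < 0.0 and 0.0 <= t3 <= 4.0

# One-to-one pairing: sensor -> confidence-region membership test.
CONFIDENCE_REGION = {
    "32a": in_region_44a,
    "32b": in_region_44b,
}
```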
Before or after step 100, each obstacle sensor 32 may scan its associated detection region 40 (step 110). As previously discussed, each obstacle sensor 32 may detect the range and direction from itself to points E. It is contemplated that these detections may occur concurrently (i.e. in parallel). For example, obstacle sensor 32a may detect the range and direction from itself to points E1 (step 110a). And, obstacle sensor 32b may detect the range and direction from itself to points E2 (step 110b).
Each of obstacle sensors 32a and 32b may then simultaneously communicate to controller 34 several points E1 (step 120a) and several points E2 (step 120b), respectively. For example, obstacle sensor 32a communications may include the locations of n points E1 in coordinate system Sa in polar form:

$$X_{SaP} = \begin{bmatrix} \rho_{a1} & \theta_{a1} & \varphi_{a1} \\ \rho_{a2} & \theta_{a2} & \varphi_{a2} \\ \vdots & \vdots & \vdots \\ \rho_{an} & \theta_{an} & \varphi_{an} \end{bmatrix}$$

each row representing one point. And, obstacle sensor 32b communications may include the locations of n points E2 in coordinate system Sb in polar form:

$$X_{SbP} = \begin{bmatrix} \rho_{b1} & \theta_{b1} & \varphi_{b1} \\ \rho_{b2} & \theta_{b2} & \varphi_{b2} \\ \vdots & \vdots & \vdots \\ \rho_{bn} & \theta_{bn} & \varphi_{bn} \end{bmatrix}$$

each row representing one point.
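For concreteness, the communicated polar-form locations could be held as n x 3 arrays; the values below are purely illustrative (meters and radians), not data from the disclosure.

```python
import numpy as np

# Hypothetical n = 3 detections per sensor, one row per point,
# columns [rho, theta, phi] as defined above.
XSaP = np.array([
    [4.2, 0.10, 1.45],
    [4.3, 0.12, 1.44],
    [7.9, 0.55, 1.50],
])
XSbP = np.array([
    [3.6, -0.08, 1.47],
    [3.7, -0.05, 1.46],
    [8.1, -0.40, 1.52],
])
```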
Next, controller 34 may assemble the raw locations of points E into map 36 (step 130). This assembly may include sub-steps. In particular, step 130 may include the sub-step of transforming the received locations of points E into coordinate system T (sub-step 150). Step 130 may also include the sub-step of applying a confidence filter to points E (sub-step 160). Additionally, step 130 may include unionizing points E received from each obstacle sensor 32 (sub-step 170).
Transforming the received locations of points E into coordinate system T (sub-step 150) may also include sub-steps. These sub-steps may be specific to each obstacle sensor, and may again be performed concurrently. For example, controller 34 may relate points E1 in coordinate system Sa to their locations in coordinate system T. In particular, controller 34 may first relate points E1 in coordinate system Sa in polar form to their locations in coordinate system Sa in Cartesian form (sub-step 180a). The relation between coordinate system Sa in polar form (i.e. XSaP) and coordinate system Sa in Cartesian form (i.e. XSa) may be as follows:

$$X_{Sa} = \begin{bmatrix} \rho_{a1}\sin\varphi_{a1}\cos\theta_{a1} & \rho_{a1}\sin\varphi_{a1}\sin\theta_{a1} & \rho_{a1}\cos\varphi_{a1} \\ \vdots & \vdots & \vdots \\ \rho_{an}\sin\varphi_{an}\cos\theta_{an} & \rho_{an}\sin\varphi_{an}\sin\theta_{an} & \rho_{an}\cos\varphi_{an} \end{bmatrix}$$

where each row represents one point.
Next, controller 34 may relate points E1 in coordinate system Sa in Cartesian form to their locations in coordinate system T (sub-step 190a). The relation between coordinate system Sa in Cartesian form and coordinate system T may be as follows:

$$X_T = \begin{bmatrix} X_{Sa1} \\ X_{Sa2} \\ \vdots \\ X_{San} \end{bmatrix} A_{Sa} + B_{Sa}$$

where XSa1 is the first row of XSa, XSa2 is the second row of XSa, and XSan is the nth row of XSa; ASa=AysaApsaArsa, and represents the rotational transform from coordinate system Sa in Cartesian form to coordinate system T, where Aysa, Apsa, and Arsa are the elementary rotation matrices for the yaw angle ysa, the pitch angle psa, and the roll angle rsa of AT(Sa), respectively; and

$$B_{Sa} = \begin{bmatrix} -b_{Sa1} & -b_{Sa2} & -b_{Sa3} \\ \vdots & \vdots & \vdots \\ -b_{Sa1} & -b_{Sa2} & -b_{Sa3} \end{bmatrix}$$

an n-row matrix with each row equal to XT(OSa), and represents the translational transform from coordinate system Sa in Cartesian form to coordinate system T.
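A minimal numpy sketch of sub-steps 180a and 190a follows, treating points as row vectors. The yaw-pitch-roll sign and order conventions are assumptions; the disclosure fixes only the composite ASa = AysaApsaArsa.

```python
import numpy as np

def polar_to_cartesian(points_polar):
    """Sub-step 180a: convert n x 3 polar rows [rho, theta, phi] to
    Cartesian rows [x, y, z]. theta is the polar angle from the x
    axis; phi is the zenith angle from the z axis."""
    rho, theta, phi = points_polar[:, 0], points_polar[:, 1], points_polar[:, 2]
    return np.column_stack((
        rho * np.sin(phi) * np.cos(theta),
        rho * np.sin(phi) * np.sin(theta),
        rho * np.cos(phi),
    ))

def rotation_ASa(ysa, psa, rsa):
    """Composite rotation ASa = Aysa Apsa Arsa for row-vector points.
    One common convention is assumed here; the disclosure does not
    spell out the elementary matrices."""
    cy, sy = np.cos(ysa), np.sin(ysa)
    cp, sp = np.cos(psa), np.sin(psa)
    cr, sr = np.cos(rsa), np.sin(rsa)
    Aysa = np.array([[cy, sy, 0.0], [-sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Apsa = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    Arsa = np.array([[1.0, 0.0, 0.0], [0.0, cr, sr], [0.0, -sr, cr]])
    return Aysa @ Apsa @ Arsa

def sensor_to_T(points_cartesian, ASa, origin_in_T):
    """Sub-step 190a: rotate row-vector points by ASa and translate
    by XT(OSa) = [-bSa1, -bSa2, -bSa3]."""
    return points_cartesian @ ASa + origin_in_T
```

Points E2 from obstacle sensor 32b would pass through the same two routines with ASb and XT(OSb).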
Similarly, controller 34 may relate points E2 in coordinate system Sb to their locations in coordinate system T. In particular, controller 34 may first relate points E2 in coordinate system Sb in polar form to their locations in coordinate system Sb in Cartesian form (sub-step 180b). The relation between coordinate system Sb in polar form (i.e. XSbP) and coordinate system Sb in Cartesian form (i.e. XSb) may be as follows:

$$X_{Sb} = \begin{bmatrix} \rho_{b1}\sin\varphi_{b1}\cos\theta_{b1} & \rho_{b1}\sin\varphi_{b1}\sin\theta_{b1} & \rho_{b1}\cos\varphi_{b1} \\ \vdots & \vdots & \vdots \\ \rho_{bn}\sin\varphi_{bn}\cos\theta_{bn} & \rho_{bn}\sin\varphi_{bn}\sin\theta_{bn} & \rho_{bn}\cos\varphi_{bn} \end{bmatrix}$$

where each row represents one point.
Next, controller 34 may relate points E2 in coordinate system Sb in Cartesian form to their locations in coordinate system T (sub-step 190b). The relation between coordinate system Sb in Cartesian form and coordinate system T may be as follows:

$$X_T = \begin{bmatrix} X_{Sb1} \\ X_{Sb2} \\ \vdots \\ X_{Sbn} \end{bmatrix} A_{Sb} + B_{Sb}$$

where XSb1 is the first row of XSb, XSb2 is the second row of XSb, and XSbn is the nth row of XSb; ASb=AysbApsbArsb, and represents the rotational transform from coordinate system Sb in Cartesian form to coordinate system T, where Aysb, Apsb, and Arsb are the elementary rotation matrices for the yaw angle ysb, the pitch angle psb, and the roll angle rsb of AT(Sb), respectively; and

$$B_{Sb} = \begin{bmatrix} -b_{Sb1} & -b_{Sb2} & -b_{Sb3} \\ \vdots & \vdots & \vdots \\ -b_{Sb1} & -b_{Sb2} & -b_{Sb3} \end{bmatrix}$$

an n-row matrix with each row equal to XT(OSb), and represents the translational transform from coordinate system Sb in Cartesian form to coordinate system T.
The application of a confidence filter to points E (sub-step 160) may be performed before or after sub-step 150, and may be based upon the pairings of step 100. In particular, the received locations of points E1 may be filtered so as to retain only those points E1 within confidence region 44a (sub-step 160a). And, the received locations of points E2 may be filtered so as to retain only those points E2 within confidence region 44b (sub-step 160b). These filterings may occur concurrently, and serve to resolve conflicts between obstacle sensor 32a and 32b detections (i.e. where a conflict exists, a detection by only one obstacle sensor 32 will be retained).
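Sub-step 160 then reduces to a per-sensor membership test against the paired region of step 100. A sketch, with the region predicate passed in so the same routine serves sub-steps 160a and 160b:

```python
def confidence_filter(points_T, in_region):
    """Sub-step 160: retain only points (rows [t1, t2, t3] in
    coordinate system T) that fall inside the confidence region
    paired with the reporting sensor."""
    return [p for p in points_T if in_region(p[0], p[1], p[2])]

# For example (names from the earlier hypothetical sketch):
#   points_e1 = confidence_filter(points_e1, in_region_44a)  # sub-step 160a
#   points_e2 = confidence_filter(points_e2, in_region_44b)  # sub-step 160b
```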
After completing sub-steps 150 and 160, controller 34 may unionize the transformed and filtered points E1 and E2 (hereafter "points U") (sub-step 170). Specifically, controller 34 may delete all points stored in map 36, and then incorporate points U into map 36. It is contemplated that this deletion and incorporation may keep map 36 up-to-date (i.e. only the most recent detections will be stored in map 36). It is further contemplated that controller 34 may lock map 36 after incorporating points U, thereby preventing the newly stored points U from being deleted before controller 34 determines a characteristic of an obstacle 12 (step 140).
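The delete-then-incorporate update and the lock might be modeled as below; using threading.Lock is an implementation choice the disclosure leaves open.

```python
import threading

class Map36:
    """Holds only the most recent unionized detections (points U)."""

    def __init__(self):
        self._points = []
        self._lock = threading.Lock()

    def refresh(self, points_u):
        """Sub-step 170: delete all stored points, then incorporate
        points U, so the map reflects only the latest scan."""
        with self._lock:
            self._points = list(points_u)

    def snapshot(self):
        """Read the stored points under the lock, so step 140 sees a
        consistent set that cannot be deleted mid-determination."""
        with self._lock:
            return list(self._points)
```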
After completing step 130, controller 34 may proceed to step 140, which may include sub-steps. In particular, step 140 may include the sub-step of applying a height filter to points U (sub-step 200). Step 140 may also include the sub-step of converting points U into obstacles 12 through blob extraction (sub-step 210). Additionally, step 140 may include the sub-step of applying a size filter to obstacles 12, thereby determining a characteristic (i.e. the size) of obstacles 12 (sub-step 220).
Controller 34 may apply a height filter to points U to filter out ground surface 37 (referring to
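However ground surface 37 is referenced, the height filter itself can be a single threshold on the t3 (height) coordinate. A sketch with an illustrative clearance value:

```python
def height_filter(points_u, ground_clearance=0.3):
    """Sub-step 200: discard points at or below an assumed
    ground-surface height (meters; value is illustrative)."""
    return [p for p in points_u if p[2] > ground_clearance]
```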
Next, controller 34 may convert points U into obstacles 12 through blob extraction (sub-step 210). Blob extraction is well known in the art of computer vision. Obstacles are found by clustering similar points into groups, called blobs. In particular, blob extraction works by clustering adjacent points U (indicating an obstacle 12 is present) and treating each cluster as a unit. Two points U are adjacent if they have either: (1) equivalent spatial coordinates t1 and consecutive spatial coordinates t2; (2) equivalent spatial coordinates t1 and consecutive spatial coordinates t3; (3) equivalent spatial coordinates t2 and consecutive spatial coordinates t1; (4) equivalent spatial coordinates t2 and consecutive spatial coordinates t3; (5) equivalent spatial coordinates t3 and consecutive spatial coordinates t1; or (6) equivalent spatial coordinates t3 and consecutive spatial coordinates t2. By converting points U into obstacles 12, controller 34 can treat each obstacle 12 as an individual unit suitable for further processing.
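A minimal sketch of sub-step 210 follows, assuming points U have been quantized to integer grid coordinates so that "consecutive" means indices differing by one; the six adjacency conditions are read here as the familiar 6-connected grid neighborhood.

```python
from collections import deque

def blob_extraction(grid_points):
    """Sub-step 210: cluster adjacent points U into blobs (obstacles 12).
    grid_points holds integer (t1, t2, t3) grid coordinates; adjacency
    is equality in two coordinates and a difference of one in the third."""
    remaining = set(grid_points)
    blobs = []
    while remaining:
        seed = remaining.pop()
        blob, frontier = [seed], deque([seed])
        while frontier:
            t1, t2, t3 = frontier.popleft()
            for d1, d2, d3 in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                neighbor = (t1 + d1, t2 + d2, t3 + d3)
                if neighbor in remaining:
                    remaining.remove(neighbor)
                    blob.append(neighbor)
                    frontier.append(neighbor)
        blobs.append(blob)
    return blobs
```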
Controller 34 may then apply a size filter to obstacles 12 (sub-step 220). Specifically, controller 34 may filter out obstacles 12 that do not have at least one of height 16, width 18, and depth 20 longer than length 22 (referring to
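Sub-step 220 can then be a bounding-box test on each blob. The threshold stands in for length 22, whose value the disclosure leaves open, and the grid cell size is likewise illustrative:

```python
def size_filter(blobs, min_length=1.0, cell=0.1):
    """Sub-step 220: keep a blob only if at least one bounding-box
    dimension (height 16, width 18, or depth 20) exceeds the
    threshold (length 22). cell converts grid units to meters."""
    kept = []
    for blob in blobs:
        dims = [
            (max(p[i] for p in blob) - min(p[i] for p in blob) + 1) * cell
            for i in range(3)
        ]
        if any(d > min_length for d in dims):
            kept.append(blob)
    return kept
```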
It is contemplated that after step 140, operation of the disclosed system may vary according to application. Since obstacles 12 may be dangerous, it is contemplated that the disclosed system may be incorporated into a vehicle collision avoidance system, which may warn an operator of machine 10 of dangerous obstacles 12. This incorporation may be simple and cost-effective because the disclosed system need not have access to information regarding external parameters. In particular, it need not include hardware for gathering information regarding these external parameters. Alternatively, it is contemplated that the disclosed system may be incorporated into a security system. This incorporation may also be cost-effective because the disclosed system may be configured with detection regions only in high-threat areas such as, for example, windows and doors.
It will be apparent to those skilled in the art that various modifications and variations can be made to the method and system of the present disclosure. Other embodiments of the method and system will be apparent to those skilled in the art from consideration of the specification and practice of the method and system disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents.