The present invention relates to autonomous navigation, and in particular to autonomous navigation for farm, construction, and other off-road equipment, and to calibration of the sensors used for navigation and obstacle detection.
Autonomous navigation for vehicles requires sensors that can detect the location of objects relative to the vehicle. The vehicle can have a localization sensor, such as a Global Navigation Satellite System (GNSS) receiver. Sensors such as radar, camera, and lidar are mounted at different locations on the vehicle, and their locations relative to the localization sensor need to be determined. Some vehicles, such as farm equipment, operate in rough environments with substantial vibration, or in cluttered environments where sensors may strike objects such as trees, which can affect the mounting of a sensor and the direction it is pointing. In addition, during manufacturing and/or assembly, a sensor may not end up pointed in exactly the desired direction.
It would be desirable to have a calibration system to adjust for any movement of the sensor and also determine the sensor's location relative to a localization sensor. As such, new systems, methods, and other techniques are needed to address these and other issues.
Unless otherwise indicated herein, the materials described in this section of the Specification are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
In embodiments, a method and apparatus are provided for calibrating a sensor or suite of sensors. An object is placed in a field of view of the sensor at a known location. The object is moved in a known manner. Data from the sensor is examined to detect an object image moving in the known manner. An apparent location of the object is determined from the sensor output. The apparent location is compared to the known location of the object to determine an offset. The offset is stored and used to adjust future detected locations of objects.
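The comparison step above can be sketched in a few lines. This is an illustrative simplification, not the claimed implementation: the function names, coordinates, and two-dimensional frame are all assumptions made for the example.

```python
# Hypothetical sketch of the basic calibration step: compare the sensor's
# apparent reading of a target at a known location, store the offset, and
# apply it to later detections. All names and values are illustrative.

def compute_offset(known_xy, apparent_xy):
    """Offset = apparent position minus ground-truth position."""
    return (apparent_xy[0] - known_xy[0], apparent_xy[1] - known_xy[1])

def correct_detection(detected_xy, offset):
    """Subtract the stored offset from a later raw detection."""
    return (detected_xy[0] - offset[0], detected_xy[1] - offset[1])

# Target placed at (10.0, 5.0) m; sensor reports it at (10.4, 4.8) m.
offset = compute_offset((10.0, 5.0), (10.4, 4.8))

# A later raw detection is corrected using the stored offset.
corrected = correct_detection((20.4, 9.8), offset)
```

In practice the offset would also carry an angular component, as described later in this disclosure; the sketch shows only the translational part.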
In embodiments, the object is part of a calibration system and the sensor is mounted on a vehicle. The known location of the object is determined by communication between the calibration system and a localization sensor (e.g., a GNSS sensor) mounted on the vehicle. The calibration system with the object is moved to multiple known locations for the determination of multiple offsets at each of the multiple known locations.
An advantage of embodiments of the invention is that calibration can easily be done in the factory, at a showroom or in the field. The calibration can be repeated over time as needed.
In embodiments, the object is moved at one or more frequencies, such as by rotation. The sensor is a radar sensor in one embodiment. The object is detected by filtering at one or more frequencies, which includes running a Fourier Transform on a sequence of frames from the sensor to detect peaks at the known frequencies. The object is chosen as being a radar reflector (e.g., a cube) with high returned energy in the frequency band of the sensor. The object in one embodiment is mounted on a calibration apparatus and is rotated.
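The frequency-filtering idea above can be illustrated with a small simulation. This is a hedged sketch under assumed values (frame rate, rotation frequency, noise level), not the patented implementation: a per-frame return for one sensor cell is modulated at the known rotation frequency, and a Fourier Transform over the sequence of frames reveals a peak at that frequency.

```python
import numpy as np

# Illustrative sketch: detect a calibration target rotating at a known
# frequency by running an FFT over a sequence of sensor frames and looking
# for a spectral peak at that frequency. All parameters are assumptions.

FRAME_RATE = 20.0   # sensor frames per second (assumed)
ROTATION_HZ = 2.0   # known rotation frequency of the target (assumed)
N_FRAMES = 200

t = np.arange(N_FRAMES) / FRAME_RATE
rng = np.random.default_rng(0)

# Simulated return for one range/azimuth cell: sinusoidal RCS modulation
# at the rotation frequency, plus measurement noise.
cell = 1.0 + 0.5 * np.sin(2 * np.pi * ROTATION_HZ * t) \
           + 0.1 * rng.standard_normal(N_FRAMES)

# Remove the DC component, then locate the dominant frequency.
spectrum = np.abs(np.fft.rfft(cell - cell.mean()))
freqs = np.fft.rfftfreq(N_FRAMES, d=1.0 / FRAME_RATE)
peak_hz = freqs[np.argmax(spectrum)]   # expected to land near ROTATION_HZ
```

A cell whose spectrum peaks at the known rotation frequency is a candidate location for the calibration target; cells without such a peak can be rejected as clutter.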
This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this disclosure, any or all drawings, and each claim.
The foregoing, together with other features and examples, will be described in more detail below in the following specification, claims, and accompanying drawings.
The terms and expressions that have been employed are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described or portions thereof. It is recognized, however, that various modifications are possible within the scope of the systems and methods claimed. Thus, although the present systems and methods have been specifically disclosed by examples and optional features, modification and variation of the concepts herein disclosed may be made by those skilled in the art, and such modifications and variations are considered to be within the scope of the systems and methods as defined by the appended claims.
The accompanying drawings, which are included to provide a further understanding of the disclosure, are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the detailed description serve to explain the principles of the disclosure. No attempt is made to show structural details of the disclosure in more detail than may be necessary for a fundamental understanding of the disclosure and various ways in which it may be practiced.
Disclosed are techniques for calibrating sensors on a vehicle, in particular an off-road vehicle.
In some embodiments, additional sensors 112, 114 and 116 are provided. Any number of sensors could be provided. The sensors can be a mix of sensors, including radar, lidar and camera sensors. Each sensor could potentially be misaligned, and they could be misaligned with respect to each other.
In embodiments, a GNSS 118 or other localization sensor is mounted on vehicle 104. In order to relate what the other sensors see to the GNSS provided location, the location of the other sensors on vehicle 104 relative to GNSS 118 needs to be determined. GNSS 118 has an x, y coordinate system (120, 122) based on its mounting position, which is offset from the X, Y coordinate system 110, 111. Using a GNSS or other localization device on object 108, its position relative to GNSS sensor 118 is determined. Each sensor's location relative to object 108 is then determined. The offset in positions relative to the GNSS device 118 is then calculated. Embodiments thus provide not only a calibration method for determining the direction a sensor is pointing, but also an automatic calibration of the location of each sensor relative to the GNSS device, which is the designated vehicle frame location for the vehicle.
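The relative-position calculation described above can be sketched as simple vector arithmetic. This is an illustrative simplification that assumes the coordinate axes are already rotationally aligned, leaving only a translation; every coordinate value below is an assumption for the example.

```python
import numpy as np

# Hedged sketch of relating a sensor's frame to the vehicle's GNSS frame.
# The GNSS on the calibration object gives the object's true world position;
# comparing the true displacement with the sensor's reported displacement
# yields the sensor's mounting offset relative to the vehicle GNSS.

vehicle_gnss = np.array([100.0, 50.0])   # vehicle GNSS antenna (world frame)
object_gnss = np.array([108.0, 53.0])    # calibration object's GNSS position

# True displacement of the object from the vehicle's GNSS antenna:
true_rel = object_gnss - vehicle_gnss    # [8.0, 3.0]

# The sensor reports the object at this displacement in its own frame:
sensor_rel = np.array([6.5, 2.0])

# Sensor's mounting position relative to the GNSS antenna:
sensor_offset = true_rel - sensor_rel    # [1.5, 1.0]
```

Repeating this for each sensor gives every sensor's position relative to GNSS 118, the designated vehicle frame location.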
Calibrating a radar sensor can be particularly problematic. Radar returns have a high degree of noise, which makes them difficult to filter, and it can thus be difficult to distinguish between objects in a radar return signal. Normally these returns include information about SNR (signal-to-noise ratio) and RCS (radar cross section).
In addition to selecting a calibration object with a high RCS (radar cross section), the object is made more detectable relative to other detected objects by moving it in a known manner. The processor that processes the radar signal then looks for anything moving in that known manner. In one embodiment, the object is simply rotated at a known frequency. The multiple-tetrahedron shape is well suited to continually providing a known RCS response signal as the object is rotated, since multiple different tetrahedrons become visible in turn. The return is strongest when a corner reflector faces the detector and decreases as the angle changes; this can be represented by a bell curve whose peak occurs when the reflector points directly back at the detector. With the spinning motion at the known frequency, the RCS response will be a sinusoidal curve at the same frequency.
The corner reflector 202 of
In alternate embodiments, other known movements could be used. For example, the object can be moved at a known speed, and/or known acceleration, and/or in a known direction. In some embodiments different combinations could be used, such as moving the object in a known direction at a known acceleration, or moving it in a known direction while rotating at a known frequency.
Other embodiments could vary the structure of
An IMU or accelerometer can be in the GNSS unit or separate. The measured tilt allows the angle of pole 306 to be determined. This can be subtracted from the total angle detected, with the remaining angle indicating the mounting angle of the sensor. A camera image of the April tag or other camera target can alternately be used to determine the angle of pole 306 (or other device). The distortion of the April tag indicates the amount of tilt. Essentially, the detected April tag image can be tilted in the computer until it matches the expected vertical April tag. The amount the image had to be tilted to match the stored, expected April tag pattern indicates the amount of tilt of pole 306.
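The tilt-subtraction step above reduces to a simple difference. The following minimal sketch uses illustrative angle values; in a real system the pole tilt would come from the IMU, accelerometer, or April tag comparison.

```python
# Minimal sketch of the tilt-correction step: the measured tilt of the
# calibration pole is subtracted from the total angle seen by the sensor,
# leaving the angle attributable to the sensor's own mounting.
# All angle values are illustrative assumptions.

def sensor_mounting_angle(total_angle_deg, pole_tilt_deg):
    """Remove the calibration pole's measured tilt from the total angle."""
    return total_angle_deg - pole_tilt_deg

# Sensor sees the target 5.2 degrees off vertical; the IMU reports the pole
# itself is tilted 1.7 degrees, so 3.5 degrees is the sensor mounting angle.
mounting = sensor_mounting_angle(5.2, 1.7)
```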
In one embodiment, when the location of the calibration apparatus is detected using GNSS or another localization method, that location is conveyed to a processor analyzing the sensor data. An offset is applied for the position on the calibration apparatus, relative to the GNSS, of the target being searched for—a corner reflector for radar, a lidar reflector for lidar, or a target image for a camera. A bounding box around that location is constructed to limit the search area for the target. This limits the amount of processing needed and speeds up localization of the target. If the calibration target is not found, the size of the bounding box is increased until the calibration target is found. In an alternate embodiment, the bounding box can simply be placed around the height of the pole, throughout the width of the sensor image.
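The expanding bounding-box search can be sketched as follows. This is a hedged illustration: the detection list, box sizes, and growth factor are all assumptions, and a real implementation would operate on the sensor's native image or point-cloud coordinates.

```python
# Illustrative sketch of the expanding bounding-box search: start with a
# small box around the target's expected position (from the conveyed GNSS
# location plus the on-apparatus offset) and grow the box until the target
# is found, limiting how much of the sensor data must be searched.

def find_target(detections, expected_xy,
                initial_half_size=1.0, growth=2.0, max_half_size=32.0):
    """Return the first detection inside the box, growing the box on failure."""
    half = initial_half_size
    while half <= max_half_size:
        for (x, y) in detections:
            if (abs(x - expected_xy[0]) <= half
                    and abs(y - expected_xy[1]) <= half):
                return (x, y)
        half *= growth   # target not found: enlarge the search area
    return None          # give up past the maximum box size

# Candidate returns from the sensor; the target is expected near (20, 9).
detections = [(14.0, 3.0), (22.5, 8.0)]
found = find_target(detections, expected_xy=(20.0, 9.0))
```

The box starts too small to contain either detection and is doubled until the detection at (22.5, 8.0) falls inside it, illustrating how a misplaced expectation still converges on the target.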
In embodiments, the radar corner reflector is made of stainless steel or another material that is highly visible to radar. Reflective tape of alternating colors can be placed over the corner reflector. The reflective tape will be highly visible to a lidar sensor. Also, the changing colors of the reflective tape as the corner reflector is rotated will be easily detectable by a camera sensor, and can be used instead of an April tag. In one embodiment, alternating red and white reflective tape is used.
In embodiments, the position of each sensor relative to the localization sensor 118 is determined. From that data, the position of each sensor relative to each of the other sensors can be determined. The offset will include a combined effect due to the different location of the sensor compared to the GNSS and any misaligned angle of the sensor. The offset thus has both an X, Y, and Z distance component (from the GNSS) and an angle component.
In embodiments, multiple templates could be used for different areas of the FOV. By taking multiple measurements at different positions in the field of view, all positions can be determined using interpolation from the measured positions. Thus, an X, Y, Z offset can be determined for each position. This X, Y, Z offset combines the effects of the physical X, Y, Z distance between the radar sensor and the GNSS on the vehicle, and the effects of the angle at which the radar sensor is pointing.
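The interpolation step above can be sketched with one-dimensional linear interpolation over azimuth. This is an illustrative simplification: the measurement positions and offset values are assumptions, and a real system would interpolate over the full field of view rather than a single axis.

```python
import numpy as np

# Illustrative sketch of interpolating measured offsets across the field of
# view: offsets measured at a few azimuth positions are linearly
# interpolated to estimate the offset at any in-between position.

measured_azimuths = np.array([-30.0, 0.0, 30.0])    # degrees (assumed)
measured_x_offsets = np.array([0.6, 0.4, 0.3])      # metres (assumed)
measured_y_offsets = np.array([-0.1, 0.0, 0.2])

def offset_at(azimuth_deg):
    """Linearly interpolate the stored X and Y offsets for one azimuth."""
    x = np.interp(azimuth_deg, measured_azimuths, measured_x_offsets)
    y = np.interp(azimuth_deg, measured_azimuths, measured_y_offsets)
    return x, y

# Halfway between the 0-degree and 30-degree measurements:
x_off, y_off = offset_at(15.0)
```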
In embodiments, the corner reflector ranges in size from less than an inch in diameter to 6 or 12 inches in diameter. The size of the corner reflector determines the strength of the return signal, and thus its SNR and RCS.
Returning to
In embodiments, object 108 is moved to different positions, and the calibration comparison is repeated. Alternately, multiple objects are rotated and detected at the same time. If the offsets vary at different positions, the offsets are averaged or otherwise combined to provide the offset used for calibration.
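Averaging the per-position offsets, one of the combinations mentioned above, reduces to a mean over the measurements. The offset values below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of combining offsets measured at several target positions
# into a single calibration offset by averaging. Values are illustrative.

offsets = np.array([
    [0.42, -0.18],   # offset measured at position 1
    [0.38, -0.22],   # offset measured at position 2
    [0.40, -0.20],   # offset measured at position 3
])
combined = offsets.mean(axis=0)   # averaged X and Y offset
```

Other combinations, such as a median or a weighted average favoring positions with stronger returns, could be used where individual measurements are noisy.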
In some embodiments, a lidar sensor is used in addition to a radar sensor. Lidar can be more accurate, while radar can better penetrate fog, dust and other obstructions. The lidar sensor can be similarly calibrated using a reflective target highly visible to lidar. If the lidar and radar targets are combined, the lidar can help more accurately detect the radar object and provide a more accurate offset for the radar sensor. The data from the lidar and radar sensors can be correlated. In addition, there can be a correlation among multiple sensors of the same type (e.g., multiple radars, multiple lidars, or multiple cameras), in addition to the correlation between different sensor modalities.
In one alternate embodiment, the rotation of radar reflector 202 is stopped for the lidar measurement. That ensures that the reflector is at a fixed position that does not move. The position of the lidar reflector 602 can be known from a measurement with respect to the lidar sensor. Since the lidar sensor can be mounted at a different location on a vehicle than the radar sensor, a known offset between the lidar and radar sensor can be subtracted from the radar offset.
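The frame transfer described above is again simple vector arithmetic. This sketch assumes aligned axes and uses illustrative coordinates; the lidar-to-radar displacement would come from the vehicle's mechanical design or a prior calibration.

```python
import numpy as np

# Hedged sketch of using a lidar measurement to refine the radar offset:
# the known lidar-to-radar mounting displacement is subtracted so the
# lidar-derived target position is expressed in the radar's frame, and the
# radar's apparent position is compared against it. Values are illustrative.

lidar_target = np.array([12.0, 4.0, 1.5])     # target position, lidar frame
lidar_to_radar = np.array([0.8, -0.3, 0.2])   # known mounting displacement

# Target position re-expressed relative to the radar sensor:
radar_target = lidar_target - lidar_to_radar  # [11.2, 4.3, 1.3]

# Radar's raw apparent position of the same (stationary) target:
radar_apparent = np.array([11.5, 4.1, 1.3])

# Calibration offset for the radar sensor:
radar_offset = radar_apparent - radar_target
```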
Processor 804 performs the calibration calculations described above, based on input from a user of the known target position as measured by hand, or by the GNSS calculations between the vehicle GNSS and the calibration apparatus GNSS. The finally calculated offset, or multiple offsets, is stored in data memory 813 and used in subsequent obstacle detection operations, which determine the control signals to speed controller 814 and steering controller 816.
In some embodiments, a communications module 818 provides communications over the internet 820 and/or other networks to a remote computer 822. Remote computer 822 has its own communications module 824, processor 826, data memory 828 and program memory 830. Data memory 828 can store the offset as a backup. Data memory 828 can also store, in connection with each offset, the characteristics of each type of vehicle, including performance data depending upon conditions. Offset data from multiple vehicles of the same type operating under the same conditions can be reviewed to isolate and fix problems. In one embodiment, if there is sufficient consistency in the offsets, vehicles without calibration targets can be assigned the average offset of calibrated vehicles, on the assumption that they would have a similar offset due to factory installation or operation under similar conditions on the same type of equipment. Also, the locations of the sensors on other vehicles with the same arrangement of sensors can be used to set bounding boxes for each sensor to search for the calibration object, and can be downloaded to data memory 813.
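The fleet-averaging idea above can be sketched with a simple consistency check. The spread threshold, data values, and function name are all assumptions made for the example.

```python
# Illustrative sketch: when the stored offsets for one vehicle model are
# sufficiently consistent across the fleet, their average can serve as a
# default for uncalibrated vehicles of the same model. The consistency
# threshold and offset values are assumptions.

def fleet_default_offset(offsets, max_spread=0.1):
    """Return the mean offset if per-vehicle offsets agree closely, else None."""
    xs = [o[0] for o in offsets]
    ys = [o[1] for o in offsets]
    if max(xs) - min(xs) > max_spread or max(ys) - min(ys) > max_spread:
        return None   # too much disagreement: require per-vehicle calibration
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Offsets reported by three calibrated vehicles of the same model:
fleet = [(0.41, -0.19), (0.39, -0.21), (0.40, -0.20)]
default = fleet_default_offset(fleet)   # consistent, so an average is returned
```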
Computer system 1000 is shown comprising hardware elements that can be electrically coupled via a bus 1005, or may otherwise be in communication, as appropriate. The hardware elements may include one or more processors 1010, including without limitation one or more general-purpose processors and/or one or more special-purpose processors such as digital signal processing chips, graphics acceleration processors, and/or the like; one or more input devices 1015, which can include, without limitation, a mouse, a keyboard, a camera, and/or the like; and one or more output devices 1020, which can include, without limitation, a display device, a printer, and/or the like.
Computer system 1000 may further include and/or be in communication with one or more non-transitory storage devices 1025, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device, such as a random access memory (“RAM”), and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including without limitation, various file systems, database structures, and/or the like.
Computer system 1000 might also include a communications subsystem 1030, which can include, without limitation a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device, and/or a chipset such as a Bluetooth® device, an 802.11 device, a Wi-Fi device, a WiMax device, cellular communication facilities, etc., and/or the like. The communications subsystem 1030 may include one or more input and/or output communication interfaces to permit data to be exchanged with a network such as the network described below to name one example, to other computer systems, and/or any other devices described herein. Depending on the desired functionality and/or other implementation concerns, a portable electronic device or similar device may communicate image and/or other information via the communications subsystem 1030. In other embodiments, a portable electronic device may be incorporated into computer system 1000, e.g., an electronic device as an input device 1015. In some embodiments, computer system 1000 will further comprise a working memory 1035, which can include a RAM or ROM device, as described above.
Computer system 1000 also can include software elements, shown as being currently located within the working memory 1035, including an operating system 1040, device drivers, executable libraries, and/or other code, such as one or more application programs 1045, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the methods discussed above can be implemented as code and/or instructions executable by a computer and/or a processor within a computer; in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general-purpose computer or other device to perform one or more operations in accordance with the described methods.
A set of these instructions and/or code may be stored on a non-transitory computer-readable storage medium, such as the storage device(s) 1025 described above. In some cases, the storage medium might be incorporated within a computer system, such as computer system 1000. In other embodiments, the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general-purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by computer system 1000, and/or might take the form of source and/or installable code which, upon compilation and/or installation on computer system 1000 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code.
It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware or software including portable software, such as applets, etc., or both. Further, connection to other computing devices such as network input/output devices may be employed.
As mentioned above, in one aspect, some embodiments may employ a computer system such as computer system 1000 to perform methods in accordance with various embodiments of the technology. According to a set of embodiments, some or all of the procedures of such methods are performed by computer system 1000 in response to processor 1010 executing one or more sequences of one or more instructions, which might be incorporated into the operating system 1040 and/or other code, such as an application program 1045, contained in the working memory 1035. Such instructions may be read into the working memory 1035 from another computer-readable medium, such as one or more of the storage device(s) 1025. Merely by way of example, execution of the sequences of instructions contained in the working memory 1035 might cause the processor(s) 1010 to perform one or more procedures of the methods described herein. Additionally, or alternatively, portions of the methods described herein may be executed through specialized hardware.
The terms “machine-readable medium” and “computer-readable medium,” as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using computer system 1000, various computer-readable media might be involved in providing instructions/code to processor(s) 1010 for execution and/or might be used to store and/or carry such instructions/code. In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take the form of a non-volatile media or volatile media. Non-volatile media include, for example, optical and/or magnetic disks, such as the storage device(s) 1025. Volatile media include, without limitation, dynamic memory, such as the working memory 1035.
Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read instructions and/or code.
Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 1010 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by computer system 1000.
The communications subsystem 1030 and/or components thereof generally will receive signals, and the bus 1005 then might carry the signals and/or the data, instructions, etc. carried by the signals to the working memory 1035, from which the processor(s) 1010 retrieves and executes the instructions. The instructions received by the working memory 1035 may optionally be stored on a non-transitory storage device 1025 either before or after execution by the processor(s) 1010.
The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of exemplary configurations including implementations. However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
Also, configurations may be described as a process which is depicted as a schematic flowchart or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform the described tasks.
Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the technology. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not bind the scope of the claims.
As used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Thus, for example, reference to “a user” includes a plurality of such users, and reference to “the processor” includes reference to one or more processors and equivalents thereof known to those skilled in the art, and so forth.
Also, the words “comprise”, “comprising”, “contains”, “containing”, “include”, “including”, and “includes”, when used in this specification and in the following claims, are intended to specify the presence of stated features, integers, components, or steps, but they do not preclude the presence or addition of one or more other features, integers, components, steps, acts, or groups.