The subject disclosure relates to active targets for automatic optical sensor alignment.
Vehicles (e.g., automobiles, trucks, construction equipment, farm equipment, automated factory equipment) increasingly include sensors to obtain information about the vehicle and its environment. Exemplary sensors include radio detection and ranging (radar) systems, light detection and ranging (lidar) systems, and cameras. The sensors can be located anywhere within or on the vehicle. Thus, information obtained at a sensor does not necessarily indicate that information relative to the orientation and position of the vehicle. Accordingly, it is desirable to provide active targets for automatic optical sensor alignment.
In one exemplary embodiment, a method of performing automatic alignment of an optical sensor of a vehicle includes disposing two or more active targets at known locations in an alignment station. Each of the two or more active targets has at least two visibly different states. The method also includes coding a change among the at least two visibly different states for the two or more active targets. Images obtained by the optical sensor of the two or more active targets are processed to identify features and perform the alignment of the optical sensor with another sensor of the vehicle or with the vehicle.
In addition to one or more of the features described herein, the disposing the two or more active targets in the alignment station is based on a position of the optical sensor of the vehicle along a path past the two or more active targets in the alignment station.
In addition to one or more of the features described herein, the disposing the two or more active targets in the alignment station is based on a location of one or more occlusions blocking a view of the two or more active targets from the optical sensor.
In addition to one or more of the features described herein, the coding includes spatial coding of the two or more active targets based on the relative positions of the active targets.
In addition to one or more of the features described herein, the coding includes defining a pattern of the change among the at least two visibly different states among the two or more active targets such that the pattern facilitates identification of the two or more active targets.
In addition to one or more of the features described herein, the coding includes defining a duty cycle of each of the two or more active targets over a number of frame durations of the optical sensor, wherein the defining the duty cycle includes decreasing the duty cycle over the number of frame durations.
In addition to one or more of the features described herein, the coding includes defining a different frequency for the change among the at least two visibly different states for different ones of the two or more active targets.
In addition to one or more of the features described herein, the coding includes defining a different pattern of the change among the at least two visibly different states for different ones of the two or more active targets.
In addition to one or more of the features described herein, the defining the different pattern of the change among the at least two visibly different states includes conveying an identity of the different ones of the two or more active targets based on the pattern of the change among the at least two visibly different states or conveying a location of the different ones of the two or more active targets based on the pattern of the change among the at least two visibly different states.
In addition to one or more of the features described herein, the method also includes disposing one or more passive targets at known locations in the alignment station, each of the one or more passive targets having a single visible state.
In another exemplary embodiment, a system to perform automatic alignment of an optical sensor of a vehicle includes two or more active targets positioned at known locations in an alignment station. Each of the two or more active targets has at least two visibly different states. The system also includes a controller to code a change among the at least two visibly different states for the two or more active targets. Images obtained by the optical sensor of the two or more active targets are processed to identify features and perform the alignment of the optical sensor with another sensor of the vehicle or with the vehicle.
In addition to one or more of the features described herein, a position of the two or more active targets in the alignment station is based on a position of the optical sensor of the vehicle along a path past the two or more active targets in the alignment station.
In addition to one or more of the features described herein, a position of the two or more active targets in the alignment station is based on a location of one or more occlusions blocking a view of the two or more active targets from the optical sensor.
In addition to one or more of the features described herein, the two or more active targets are positioned by spatial coding based on the relative positions of the active targets.
In addition to one or more of the features described herein, the controller codes by defining a pattern of the change among the at least two visibly different states among the two or more active targets such that the pattern facilitates identification of the two or more active targets.
In addition to one or more of the features described herein, the controller codes by defining a duty cycle of each of the two or more active targets over a number of frame durations of the optical sensor, wherein the defining the duty cycle includes decreasing the duty cycle over the number of frame durations.
In addition to one or more of the features described herein, the controller codes by defining a different frequency for the change among the at least two visibly different states for different ones of the two or more active targets.
In addition to one or more of the features described herein, the controller codes by defining a different pattern of the change among the at least two visibly different states for different ones of the two or more active targets.
In addition to one or more of the features described herein, the controller defines the different pattern of the change among the at least two visibly different states to convey an identity of the different ones of the two or more active targets based on the pattern of the change among the at least two visibly different states or to convey a location of the different ones of the two or more active targets based on the pattern of the change among the at least two visibly different states.
In addition to one or more of the features described herein, the system also includes one or more passive targets disposed at known locations in the alignment station, each of the one or more passive targets having a single visible state.
The above features and advantages, and other features and advantages of the disclosure are readily apparent from the following detailed description when taken in connection with the accompanying drawings.
Other features, advantages, and details appear, by way of example only, in the following detailed description, the detailed description referring to the drawings.
The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
As previously noted, different sensors may be included in a vehicle to obtain information about the vehicle and its surroundings. This information may facilitate semi-autonomous or autonomous operation of the vehicle, for example. Among the exemplary sensors that may be available in a vehicle, a camera and a lidar system are optical sensors. As also previously noted, information obtained at a sensor, in a sensor coordinate system, must be transformed to provide the information relative to the vehicle, in the vehicle coordinate system, in order to use the information to control vehicle operation and to ensure that all information from all sensors is in the same coordinate system.
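By way of illustration only, the following sketch shows the rigid transform by which information in a sensor coordinate system may be expressed in the vehicle coordinate system. The function name, rotation, and mounting offset are assumptions for the example and do not appear in the disclosure.

```python
import numpy as np

def sensor_to_vehicle(points_sensor: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map Nx3 points from the sensor coordinate system to the vehicle
    coordinate system using a rigid transform (rotation R, translation t)."""
    return points_sensor @ R.T + t

# Example: a detection 10 m ahead of a sensor yawed 5 degrees relative to
# the vehicle and mounted 1.2 m forward of the vehicle origin (illustrative).
yaw = np.deg2rad(5.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([1.2, 0.0, 0.0])
print(sensor_to_vehicle(np.array([[10.0, 0.0, 0.0]]), R, t))
```

Determining R and t for each sensor is precisely the alignment problem addressed herein.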
Embodiments of the systems and methods detailed herein pertain to using active targets for automatic optical sensor alignment. Sensor alignment refers to determining sensor position and orientation relative to the vehicle coordinate system. In some cases (e.g., for sensor fusion), sensor alignment may also refer to determining the position and orientation of a sensor relative to the coordinate system of another sensor. Sensor alignment may be performed as part of the manufacturing process, for example (e.g., during calibration or design validation and testing). According to one or more embodiments, optical sensor alignment is performed automatically using active targets in addition to the conventionally used passive targets. In particular, active targets are coded, as described herein, and used along with passive targets to facilitate automatic alignment of optical sensors like cameras and lidar systems with each other or with the vehicle.
In accordance with an exemplary embodiment, a controller 130 may use information from one or more of the sensors 110, 120, 140 to control aspects of the operation of the vehicle 100. The controller 130 may perform aspects of the alignment process discussed herein alone or in conjunction with controllers within the sensors 110, 120, 140. The controller 130 and any controllers within the sensors 110, 120, 140 include processing circuitry that may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
The coordinate system of the vehicle 100 is shown in the drawings.
Known alignment techniques rely on features identified in images obtained by the optical sensors 115. In a simple case, for example, if the location of a feature in an image is known relative to the coordinate system of the vehicle 100, then aligning the position and orientation of the feature in the image with the known location of the feature in the coordinate system of the vehicle 100 is a straightforward process. More advanced alignment techniques are also known for scenarios in which the position of the vehicle 100 is not known (e.g., the vehicle 100 is moving), for example. The identification and location of features in the images are necessary to the known alignment techniques. As further discussed below, the active targets 210 enhance this feature identification.
The active targets 210 may be coded according to their location (i.e., spatial coding) or their visible states in a number of ways to enhance the feature detection that is used for alignment. Spatial coding of the active targets 210 may be used alone or in conjunction with other coding. Particular active targets 210 are positioned in an identifiable way, as further discussed below.
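By way of illustration only, the known locations and assigned codes of the active targets 210 might be recorded as in the following sketch, so that an observed code can be mapped back to a target and, through it, to a known location in the station 200. The class, field names, and values are assumptions for the example and do not appear in the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActiveTarget:
    target_id: str
    position_m: tuple  # (x, y, z) location in the station, meters
    code: tuple        # per-frame visible state, e.g. (1, 0, 1, 1) = on-off-on-on

TARGETS = [
    ActiveTarget("210a", (0.0, 2.0, 1.5), (1, 0, 1, 1)),
    ActiveTarget("210b", (3.0, 2.0, 1.5), (1, 1, 0, 0)),
    ActiveTarget("210c", (6.0, 2.0, 1.5), (0, 1, 0, 1)),
]

def lookup(observed_code: tuple):
    """Map an on/off sequence observed over image frames to a known target."""
    return next((t for t in TARGETS if t.code == observed_code), None)
```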
The controller 230 may communicate with each active target 210 over wires (not shown) or wirelessly, as indicated in the exemplary case. The controller 230 may also communicate with the vehicle 100 (e.g., the controller 130) to convey the locations and codes of the active targets 210 and the locations of the passive targets 220. The active targets 210 and passive targets 220 are selected and positioned based on the particular vehicle 100 configuration (i.e., the location and numbers of the optical sensors) and occlusions 215 in the station 200. Once positioned, the location of each active target 210 and passive target 220 within the station 200 is known.
After the active targets 210 and passive targets 220 are positioned and the illumination of the active targets 210 is initiated according to a controlled code, images are obtained with the optical sensors 115 as the vehicle 100 moves along the path 205 through the station 200. According to an exemplary embodiment, the vehicle 100 may instead be stationary at a known location 207 (indicated by the "X" that would align with the front of the vehicle 100) along the path 205 through the station 200. The images obtained by the optical sensors 115 are processed to identify features (e.g., the active targets 210 and passive targets 220). The feature identification facilitates automatic alignment according to known techniques. The processes are further discussed below.
For example, if the active target 210b is occluded by the occlusion 215a, the active targets 210a and 210c will be captured in images obtained by the optical sensor 115, along with active targets 210d and 210e. The occlusion of active target 210b may occur because of the position of the optical sensor 115, on the moving vehicle 100, along the path 205 relative to the occlusion 215a.
At block 420, determining coding of the active targets 210 may refer to controlling a pattern, duty cycle, or frequency (i.e., coding relating to the visible states of the active targets 210). Control of the active targets 210 may be performed by the controller 230, as previously noted.
For example, a pattern of illumination for a set or subset of the active targets 210 in the field of view of an optical sensor 115 may be coordinated. The known pattern may then be used to identify the active targets 210 of the set or subset in images obtained by the optical sensor 115. Identifying the set or subset of active targets 210 may coarsely indicate a location within the station 200 of the active targets 210. The duty cycle of one or more active targets 210 may also be controlled. For example, the duty cycle may be reduced over time. In this case, images obtained during high duty cycles may include an area that is saturated due to the high intensity of the illumination of the active target 210. This area of saturation acts as a rough estimate of the location of the active target 210. As the duty cycle is reduced, a passive target 220 within the same field of view may become visible and may be used for more precise locating.
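By way of example only, the following sketch illustrates a decreasing duty cycle of the kind described above: the target begins near-continuously illuminated, saturating the image and yielding a coarse location, and dims over successive frame durations so that a nearby passive target 220 becomes visible. The linear ramp and the specific values are assumptions for illustration.

```python
def duty_cycle_schedule(num_frames: int, start: float = 1.0, end: float = 0.1):
    """Return a per-frame duty cycle that ramps down linearly from start to end."""
    step = (start - end) / max(num_frames - 1, 1)
    return [start - k * step for k in range(num_frames)]

for frame, duty in enumerate(duty_cycle_schedule(5)):
    print(f"frame {frame}: illuminate target for {duty:.0%} of the frame duration")
```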
The frequency of one or more active targets 210 may be controlled, as well. The frequency may refer to how often a given active target 210 is illuminated. For example, a first active target 210 may switch between being illuminated and unilluminated every two frame durations of the optical sensors 115 while a second active target 210 may switch between being illuminated and unilluminated every one frame duration. This frequency can be determined by examining the active target 210 over a number of images and may facilitate identification of the first active target 210 versus the second active target 210, for example. The frequency may be mapped to a particular location or an identity, which in turn is associated with a particular location for the active target 210. Alternately, frequency may refer to a frequency-coded pattern of illumination (e.g., on-off-on-on over a number of image frames). This frequency code may also be mapped to a particular location or identity, which in turn is associated with a particular location for the active target 210. The frequency coding can convey any information as long as the code is known (i.e., a particular frequency-coded pattern of illumination is mapped to particular information).
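By way of example only, a decoder for this frequency coding might count state toggles across a sequence of images to estimate the number of frame durations between switches, then map that estimate to a target identity, as in the following sketch. The mapping table is an illustrative assumption.

```python
def frames_per_switch(states: list) -> float:
    """Estimate how many frame durations elapse between on/off switches."""
    toggles = sum(1 for a, b in zip(states, states[1:]) if a != b)
    return (len(states) - 1) / toggles if toggles else float("inf")

# Hypothetical mapping from switching period (in frame durations) to identity.
FREQUENCY_TO_ID = {2.0: "first active target", 1.0: "second active target"}

observed = [1, 1, 0, 0, 1, 1, 0, 0, 1]   # switches every two frame durations
print(FREQUENCY_TO_ID.get(frames_per_switch(observed), "unknown"))
```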
At block 430, initiating coding refers to starting the operation of the active targets 210 according to the controlled pattern, duty cycle, and/or frequency established at block 420. At block 440, obtaining images with the optical sensors 115 refers to obtaining the images while the vehicle 100 is moving along the path 205 (e.g., along an assembly line in the station 200) or while the vehicle 100 is stationary at a known location 207 along the path 205 where the targets 210, 220 are positioned (at block 410). At block 450, identifying active targets 210 and passive targets 220 using the images obtained at block 440 may involve a set of processes. These processes involve image processing, which may be performed by the controller within each optical sensor 115, by the controller 130, or by a combination of the two. Generally, as previously noted, the active targets 210 may provide a coarse identification of the area of images that include the active targets 210 while the passive targets 220 facilitate more refined identification.
The feature detection required by the known alignment techniques refers to identifying a location, within the images (obtained at block 440), of the active targets 210 and passive targets 220. Tracking of a feature over a set of images and associating features among images are also part of known alignment techniques. Because the active targets 210, according to one or more embodiments, facilitate not only identification of a given active target 210 but also determination of its location within the station 200, as previously noted, the feature detection, tracking, and association are enhanced through the use of the active targets 210. For example, if the vehicle 100 is at the known location 207 and an active target 210 is identified in an image and its location in the station 200 and its location in the coordinate system of the vehicle 100 are known, then aligning the optical sensor 115 that obtained the image is fairly straightforward according to known techniques. The processes involved in the identification (i.e., feature detection) at block 450 are further discussed below.
As part of the processing at block 450, the active targets 210 may be identified in the images first. The identification of the active targets 210 is enhanced by the pattern, duty cycle, and/or frequency coding described with reference to block 420. As previously noted, if the duty cycle of a given active target 210 is high, image saturation may occur such that only a rough estimate of the location of the given active target 210 may be obtained in an image. Once an active target 210 is identified based on the coding, its location within the station 200 is known. Thus, as another process at block 450, the location of passive targets 220 may be inferred based on the location of nearby active targets 210. An iterative process of obtaining images and performing image processing may be performed to refine the location estimation of active targets 210 and passive targets 220.
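By way of illustration only, the coarse localization of a saturated active target 210 in an image might be implemented by thresholding saturated pixels and taking their centroid, as in the following sketch; the saturation level and the synthetic frame are assumptions for the example.

```python
import numpy as np

def coarse_target_location(image: np.ndarray, saturation_level: int = 250):
    """Return the (row, col) centroid of saturated pixels, or None if absent."""
    rows, cols = np.nonzero(image >= saturation_level)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Synthetic 8-bit frame with a saturated blob near (40, 60):
frame = np.zeros((100, 100), dtype=np.uint8)
frame[38:43, 58:63] = 255
print(coarse_target_location(frame))  # approximately (40.0, 60.0)
```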
At block 460, performing alignment may refer to different types of alignment. For example, alignment at block 460 may refer to converting the coordinate system of one optical sensor 115 to that of another. This alignment may be needed to perform sensor fusion, for example. The alignment at block 460 may, instead, convert the coordinate system of an optical sensor 115 to the coordinate system of the vehicle 100. Regardless of what an optical sensor 115 is being aligned with, identification of active targets 210 and passive targets 220 at block 450 (i.e., the feature detection) and the knowledge of their location within the station 200 may be used with known techniques.
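By way of example only, one well-known technique for this alignment step is the Kabsch (SVD-based) solution for a rigid transform between point sets: given the locations of identified targets expressed in the coordinate system of the optical sensor 115 and the same targets' known locations in the coordinate system of the vehicle 100 (or of another sensor), the rotation and translation follow directly. The disclosure does not mandate this particular algorithm; the sketch is illustrative.

```python
import numpy as np

def rigid_alignment(pts_sensor: np.ndarray, pts_vehicle: np.ndarray):
    """Return (R, t) minimizing ||R @ p_sensor + t - p_vehicle|| over Nx3 pairs."""
    mu_s, mu_v = pts_sensor.mean(axis=0), pts_vehicle.mean(axis=0)
    H = (pts_sensor - mu_s).T @ (pts_vehicle - mu_v)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_v - R @ mu_s
    return R, t
```

Given at least three non-collinear targets, the rigid transform is unique; additional targets improve robustness through the implicit least-squares fit.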
For example, if the vehicle 100 is moving along an assembly line on the path 205 rather than stationary at a known location 207, structure-from-motion (SfM) or motion stereo techniques may be employed. Both SfM and motion stereo facilitate extraction of three-dimensional information from a series of images obtained by a moving optical sensor 115. The three-dimensional information facilitates feature identification through the determination of common points (e.g., active target 210) among the images. The known alignment techniques that are facilitated by the feature identification are not detailed herein.
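By way of illustration only, the motion-stereo extraction of a three-dimensional point from two images might reduce to a linear (direct linear transformation) triangulation, as in the following sketch; the 3x4 projection matrices P1 and P2 are assumed to be available from the known motion of the vehicle 100 along the path 205.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, uv1, uv2) -> np.ndarray:
    """Return the 3D point whose projections best match pixel observations
    uv1 and uv2 of the same target in two images from a moving sensor."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null-space solution of the DLT system
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize
```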
While the above disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from its scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiments disclosed, but will include all embodiments falling within the scope thereof.