System and method for evaluating the perception system of an autonomous vehicle

Information

  • Patent Grant
  • Patent Number
    11,747,809
  • Date Filed
    Wednesday, July 28, 2021
  • Date Issued
    Tuesday, September 5, 2023
Abstract
A method and apparatus are provided for optimizing one or more object detection parameters used by an autonomous vehicle to detect objects in images. The autonomous vehicle may capture the images using one or more sensors. The autonomous vehicle may then determine object labels and their corresponding object label parameters for the detected objects. The captured images and the object label parameters may be communicated to an object identification server. The object identification server may request that one or more reviewers identify objects in the captured images. The object identification server may then compare the identification of objects by reviewers with the identification of objects by the autonomous vehicle. Depending on the results of the comparison, the object identification server may recommend or perform the optimization of one or more of the object detection parameters.
Description
BACKGROUND

Autonomous vehicles use various computing systems to aid in the transport of passengers from one location to another. Some autonomous vehicles may require some initial input or continuous input from an operator, such as a pilot, driver, or passenger. Other systems, such as autopilot systems, may be used only when the system has been engaged, which permits the operator to switch from a manual mode (where the operator exercises a high degree of control over the movement of the vehicle) to an autonomous mode (where the vehicle essentially drives itself) to modes that lie somewhere in between.


Such vehicles are equipped with various types of sensors in order to detect objects in the surroundings. For example, autonomous vehicles may include lasers, sonar, radar, cameras, and other devices that scan and record data from the vehicle's surroundings. These devices in combination (and in some cases alone) may be used to determine the location of the object in three-dimensional space.


In determining whether there is an object near the autonomous vehicle, the computing systems may perform numerous calculations using a number of parameters. Adjustments to these parameters may affect the performance of the computing systems. For example, the adjustments may decrease the likelihood that the computing systems determine the presence of a given object or increase the likelihood that the computing systems do not detect the presence of an object, such as a car, traffic light, or pedestrian.


BRIEF SUMMARY

An apparatus for optimizing object detection performed by an autonomous vehicle is disclosed. In one embodiment, the apparatus includes a memory operative to store a first plurality of images captured by an autonomous vehicle and a second plurality of images, corresponding to the first plurality of images, in which an object label has been applied to an object depicted in an image of the second plurality of images. The apparatus may also include a processor in communication with the memory, the processor operative to receive the first plurality of images from the autonomous vehicle and display a first image from the first plurality of images, wherein the first image comprises an object. The processor may also be operative to receive the object label for the object displayed in the first image from the first plurality of images to obtain a first image from the second plurality of images, compare the received object label with an object label applied by the autonomous vehicle to the object in the first image in the first plurality of images, and determine whether the received object label corresponds to the object label applied by the autonomous vehicle.


In another embodiment of the apparatus, the first plurality of images comprise a first plurality of images captured by a first sensor of a first sensor type and a second plurality of images captured by a second sensor of a second sensor type.


In a further embodiment of the apparatus, the first sensor comprises a camera and the second sensor comprises a laser.


In yet another embodiment of the apparatus, the first plurality of images captured by the first sensor are images captured from a forward perspective of the autonomous vehicle.


In yet a further embodiment of the apparatus, the second plurality of images captured by the second sensor are images captured from a panoramic perspective of the autonomous vehicle.


In another embodiment of the apparatus, the at least one object label comprises a plurality of parameters that define the object label, and the plurality of parameters depend on an image sensor type used to capture the first image from the first plurality of images captured by the autonomous vehicle.


In a further embodiment of the apparatus, the processor is further operative to determine whether the received object label corresponds to the object label applied by the autonomous vehicle by determining whether the object label applied by the autonomous vehicle overlaps any portion of the received object label.


In yet another embodiment of the apparatus, the processor is further operative to determine whether the received object label corresponds to the object label applied by the autonomous vehicle by determining an object identification ratio derived from the received object label and the object label applied by the autonomous vehicle.


In yet a further embodiment of the apparatus, the processor is operative to determine whether the received object label corresponds to the object label applied by the autonomous vehicle based on a first area represented by the intersection of an area of the received object label with an area of the object label applied by the autonomous vehicle, and a second area represented by the union of the area of the received object label with the area of the object label applied by the autonomous vehicle.


In another embodiment of the apparatus, the object label applied by the autonomous vehicle is based on a plurality of object detection parameters, and the processor is further operative to optimize the plurality of object detection parameters when the indication of the correspondence between the received object label and the object label applied by the autonomous vehicle does not exceed a predetermined correspondence threshold.


A method for optimizing object detection performed by an autonomous vehicle is also disclosed. In one embodiment, the method includes storing, in a memory, a first plurality of images captured by an autonomous vehicle and displaying, with a processor in communication with a memory, a first image from the first plurality of images, wherein the first image comprises an object. The method may also include receiving an object label for the object displayed in the first image from the first plurality of images to obtain a first image for a second plurality of images, and comparing the received object label with an object label applied by the autonomous vehicle to the object in the first image in the first plurality of images, and determining whether the received object label corresponds to the object label applied by the autonomous vehicle.


In another embodiment of the method, the first plurality of images comprise a first plurality of images captured by a first sensor of a first sensor type and a second plurality of images captured by a second sensor of a second sensor type.


In a further embodiment of the method, the first sensor comprises a camera and the second sensor comprises a laser.


In yet another embodiment of the method, the first plurality of images captured by the first sensor are images captured from a forward perspective of the autonomous vehicle.


In yet a further embodiment of the method, the second plurality of images captured by the second sensor are images captured from a panoramic perspective of the autonomous vehicle.


In another embodiment of the method, the at least one object label comprises a plurality of parameters that define the object label, and the plurality of parameters depend on an image sensor type used to capture the first image from the first plurality of images captured by the autonomous vehicle.


In a further embodiment of the method, determining whether the received object label corresponds to the object label applied by the autonomous vehicle comprises determining whether the object label applied by the autonomous vehicle overlaps any portion of the received object label.


In yet another embodiment of the method, determining whether the received object label corresponds to the object label applied by the autonomous vehicle comprises determining an object identification ratio derived from the received object label and the object label applied by the autonomous vehicle.


In yet a further embodiment of the method, determining whether the received object label corresponds to the object label applied by the autonomous vehicle is based on a first area represented by the intersection of an area of the received object label with an area of the object label applied by the autonomous vehicle, and a second area represented by the union of the area of the received object label with the area of the object label applied by the autonomous vehicle.


In another embodiment of the method, the object label applied by the autonomous vehicle is based on a plurality of object detection parameters, and the method further comprises optimizing the plurality of object detection parameters when the indication of the correspondence between the received object label and the object label applied by the autonomous vehicle does not exceed a predetermined correspondence threshold.


A further apparatus for optimizing object detection performed by an autonomous vehicle is also disclosed. In one embodiment, the apparatus includes a memory operative to store a plurality of images captured by an autonomous vehicle using object detection parameters, a first plurality of object label parameters determined by the autonomous vehicle, and a second plurality of object label parameters applied by a reviewer having reviewed the plurality of images captured by the autonomous vehicle. The apparatus may also include a processor in communication with the memory, the processor operative to determine whether to optimize the plurality of object detection parameters based on a comparison of the first plurality of object label parameters with the second plurality of object label parameters, and perform an operation on the plurality of object detection parameters based on the comparison of the first plurality of object label parameters with the second plurality of object label parameters.


The operation performed on the plurality of object detection parameters may include identifying a plurality of object detection values, wherein each object detection value corresponds to at least one object detection parameter in the plurality of object detection parameters. For each possible combination of the plurality of object detection values, the operation may include performing an object detection routine on the plurality of images captured by the autonomous vehicle using the plurality of object detection values. The operation may also include selecting the combination of the plurality of object detection values that resulted in an optimal object detection routine.





BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.



FIG. 1 illustrates an example of an apparatus for optimizing one or more object detection parameters according to aspects of the disclosure.



FIG. 2 illustrates an example of the placement of one or more sensors on an autonomous vehicle according to aspects of the disclosure.



FIGS. 3A-3D illustrate various views of the approximate sensor fields of the various sensors on the autonomous vehicle according to aspects of the disclosure.



FIG. 4 is a raw camera image captured by a camera mounted on the autonomous vehicle according to aspects of the disclosure.



FIG. 5 is a laser point cloud image of the view shown in FIG. 4 according to aspects of the disclosure.



FIG. 6 is another raw camera image captured by a camera mounted on the autonomous vehicle according to aspects of the disclosure.



FIG. 7 is a laser point cloud image of the view shown in FIG. 6 according to aspects of the disclosure.



FIG. 8 is yet a further raw camera image captured by a camera mounted on the autonomous vehicle according to aspects of the disclosure.



FIG. 9 is a laser point cloud image of the view shown in FIG. 8 according to aspects of the disclosure.



FIG. 10 illustrates one example of an object identification server according to aspects of the disclosure.



FIG. 11 is a raw camera image that includes applied object labels according to aspects of the disclosure.



FIG. 12 is a laser point cloud image having applied object labels of the view shown in FIG. 11 according to aspects of the disclosure.



FIG. 13 is another raw camera image that includes an applied object label according to aspects of the disclosure.



FIG. 14 is a laser point cloud image having an applied object label of the view shown in FIG. 13 according to aspects of the disclosure.



FIG. 15 is yet another raw camera image that includes applied object labels according to aspects of the disclosure.



FIG. 16 is a laser point cloud image having applied object labels of the view shown in FIG. 15 according to aspects of the disclosure.



FIG. 17 illustrates one example of logic flow for optimizing object detection parameters according to aspects of the disclosure.





DETAILED DESCRIPTION

This disclosure provides for an apparatus and method directed to optimizing one or more object detection parameters used by a computing system on an autonomous vehicle. In particular, this disclosure provides for an apparatus and method of optimizing the one or more object detection parameters by comparing the identification of objects by the computing system in the autonomous vehicle with the identification of objects by one or more reviewers. The reviewers may review the raw images captured by the autonomous vehicle and the reviewers may manually label the objects depicted in the raw images. By “raw” image, it is meant that the image may not have been marked upon or modified by the autonomous vehicle. In other words, a “raw image” may be an image as captured by a sensor without markings that would alter the view depicted therein. As discussed with reference to FIG. 10 below, a reviewer may use the object identification server to create electronic object labels on the raw images captured by the autonomous vehicle. Captured images having been electronically marked with object labels may not be considered raw images.


The manual object labels may then be compared with object labels applied by the computing system of the autonomous vehicle to determine whether the one or more object detection parameters should be optimized. In this manner, the disclosed apparatus and method increases the likelihood that the computing system of the autonomous vehicle recognizes an object depicted in one or more raw images.



FIG. 1 illustrates an apparatus 102 for optimizing the one or more object detection parameters. In one embodiment, the apparatus may include an autonomous vehicle 104 configured to communicate with an object identification server 132. The autonomous vehicle 104 may be configured to operate autonomously, e.g., drive without the assistance of a human driver. Moreover, the autonomous vehicle 104 may be configured to detect various objects and determine the types of detected objects while the autonomous vehicle 104 is operating autonomously.


While certain aspects of the disclosure are particularly useful in connection with specific types of vehicles, the autonomous vehicle 104 may be any type of vehicle including, but not limited to, cars, trucks, motorcycles, busses, boats, airplanes, helicopters, lawnmowers, recreational vehicles, amusement park vehicles, farm equipment, construction equipment, trams, golf carts, trains, and trolleys.


The autonomous vehicle 104 may be equipped with various types of sensors 106 for detecting objects near and/or around the autonomous vehicle 104. For example, the autonomous vehicle 104 may be equipped with one or more cameras 112 for capturing images of objects in front of and/or behind the autonomous vehicle 104. As another example, the autonomous vehicle 104 may be equipped with one or more lasers 114 for detecting objects near and/or around the autonomous vehicle 104. Moreover, the autonomous vehicle 104 may be equipped with one or more radars 116 for detecting objects near and/or around the autonomous vehicle 104.


While FIG. 1 illustrates that the autonomous vehicle 104 may be equipped with one or more cameras 112, one or more lasers 114, and one or more radars 116, the autonomous vehicle 104 may be equipped with alternative arrangements of sensors. For example, the autonomous vehicle 104 may be equipped with sonar technology, infrared technology, accelerometers, gyroscopes, magnetometers, or any other type of sensor for detecting objects near and/or around the autonomous vehicle 104.


The autonomous vehicle 104 may also include a memory 108 and a processor 110 operative to capture raw images using the sensors 106. While shown as a single block, the memory 108 and the processor 110 may be distributed across many different types of computer-readable media and/or processors. The memory 108 may include random access memory (“RAM”), read-only memory (“ROM”), hard disks, floppy disks, CD-ROMs, flash memory or other types of computer memory.


Although FIG. 1 functionally illustrates the processor 110, the memory 108, and other elements of the autonomous vehicle 104 as being within the same block, it will be understood by those of ordinary skill in the art that the processor 110, the memory 108, and the sensors 106 may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing.


The memory 108 may be operative to store one or more images 118-122 captured by one or more of the sensors 106. The captured raw images may include raw camera images 118 captured using the one or more cameras 112, laser point cloud images 120 captured using the one or more lasers 114, or radar intensity images 122 captured using one or more radars. Depending on the type of sensors used by the autonomous vehicle 104, the memory 108 may store other types of images as well.


The images 118-122 may be formatted in any computer-readable format. For example, data for the images 118-122 may be stored as bitmaps comprised of grids of pixels that are stored in accordance with formats that are compressed or uncompressed, lossless (e.g., BMP) or lossy (e.g., JPEG), and bitmap or vector-based (e.g., SVG), as well as computer instructions for drawing graphics.


The raw camera images 118 may include one-, two-, or three-dimensional images having a predetermined number of megapixels. The raw camera images 118 may further be in color, black and white, or in any other format. The one or more cameras 112 may be operative to capture the one or more raw camera image(s) 118 at predetermined time intervals, such as every one millisecond, every second, every minute, or at any other interval of time. Capture rates may also be expressed in other terms, such as 30 frames per second (“fps”), 60 fps, or any other rate.


The laser point cloud images 120 may include one or more images comprised of laser points representing a predetermined view angle near and/or around the autonomous vehicle 104. For example, the laser point cloud images 120 may include one or more laser point cloud images representing a 360° view around the autonomous vehicle 104. The laser point cloud images 120 may include a predetermined number of laser points, such as 50,000 laser points, 80,000 laser points, 100,000 laser points, or any other number of laser points. As with the raw camera images 118, the autonomous vehicle 104 may be configured to capture the one or more laser point cloud images 120 at predetermined time intervals, such as 10 fps, 30 fps, every millisecond, every second, or at any other interval of time.


The radar intensity images 122 may include one or more images captured using a radar technology. As with the laser point cloud images 120 or the raw camera images 118, the radar intensity images 122 may be captured at predetermined time intervals.


Although the sensors 106 may be configured to capture images at predetermined time intervals, the predetermined time intervals may vary from sensor to sensor. Thus, the one or more camera(s) 112 may be configured to capture one or more raw images 118 at a time interval different than the one or more laser(s) 114, which may also capture one or more laser point cloud images 120 at a time interval different than the radar(s) 116. Hence, it is possible that the autonomous vehicle 104 is capturing an image, whether using the camera(s) 112, the laser(s) 114, or the radar(s) 116 at any given time.


The autonomous vehicle 104 may also include a processor 110 operative to control the sensors 106 to capture the one or more images 118-122. The processor 110 may be any conventional processor, such as commercially available central processing units (“CPUs”). As one example, the processor 110 may be implemented with a microprocessor, a microcontroller, a digital signal processor (“DSP”), an application specific integrated circuit (ASIC), discrete analog or digital circuitry, or a combination of other types of circuits or logic.


The memory 108 may also be operative to store an object detector 130. The object detector 130 may be any configuration of software and/or hardware configured to detect an object in an image 118-122 captured by one or more of the sensors 106. As an image is captured by one or more of the sensors 106, the image may be communicated to the object detector 130, which may analyze the image to determine whether there is an object present in the image. The object in the captured image may be any type of object, such as a vehicle, pedestrian, a road sign, a traffic light, a traffic cone, or any other type of object.


To determine whether an object is present in the image undergoing processing, the object detector 130 may leverage one or more image parameters 124-128. The image parameters 124-128 may instruct the object detector 130 when an arrangement of pixels, laser points, intensity maps, etc., should be considered an object. The image parameters 124-128 may also instruct the object detector 130 as to how to classify the object.


Each of the sensor types may be associated with a corresponding set of image parameters. Thus, the one or more camera(s) 112 may be associated with camera parameters 124, the one or more laser(s) 114 may be associated with laser parameters 126, and the one or more radar(s) 116 may be associated with radar parameters 128. Examples of camera parameters 124 may include the minimal brightness of a pedestrian, the minimum pixel size of a car object, the minimum width of a car object, and other such parameters. Examples of laser parameters 126 may include the height of a pedestrian, the length of a car object, an obstacle detection threshold, and other such parameters. Examples of radar parameters 128 may include the minimum distance to an object, a delay threshold for detecting an object, the height of a pedestrian, and other such parameters.
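
As a purely illustrative aid, such per-sensor parameter sets might be represented as simple keyed structures, as in the following sketch. The parameter names and values shown are hypothetical assumptions and are not drawn from the disclosure.

```python
# Illustrative sketch only: hypothetical object detection parameter sets,
# one per sensor type, keyed by parameter name.
camera_parameters = {
    "min_pedestrian_brightness": 40,       # hypothetical brightness floor (0-255)
    "min_car_pixel_size": 400,             # hypothetical minimum area in pixels
    "min_car_width_pixels": 20,            # hypothetical minimum width in pixels
}

laser_parameters = {
    "pedestrian_height_m": 1.5,            # hypothetical expected height in meters
    "car_length_m": 4.0,                   # hypothetical expected length in meters
    "obstacle_detection_threshold": 0.6,   # hypothetical detection confidence threshold
}

radar_parameters = {
    "min_object_distance_m": 2.0,          # hypothetical minimum range in meters
    "detection_delay_threshold_s": 0.1,    # hypothetical delay threshold in seconds
    "pedestrian_height_m": 1.5,            # hypothetical expected height in meters
}
```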


As discussed with reference to FIGS. 11-16, when the object detector 130 detects an object in an image, the object detector 130 may define an object label for the detected object. The object label may be defined by a bounding box encompassing the object. In alternative embodiments, the object label may be defined by a bounding oval or other bounding shape.


The object label may have one or more object label parameters that define the shape of the object label. Moreover, the object label parameters may vary depending on the sensor type of the sensor that captured the image. Assuming that the shape of the object label is a bounding box, and that the sensor that captured the image is one or more of the cameras 112, the object label parameters may include a height parameter that defines the height of the bounding box (in pixels), a width parameter that defines the width of the bounding box (in pixels), a first pixel coordinate that defines the latitudinal placement of the bounding box (e.g., an X-coordinate), and a second pixel coordinate that defines the longitudinal placement of the bounding box (e.g., a Y-coordinate). Where the sensor that captured the image is one or more of the lasers 114, the object label parameters may also include a third pixel coordinate that defines the physical height of the object or a particular laser point depicted in the captured image (e.g., a Z-coordinate). This third pixel coordinate should not be confused with the height parameter of the object label, because this third pixel coordinate may indicate the elevation of the detected object or of a given laser point (e.g., 3 meters above sea level, 2 meters above sea level, etc.). This third pixel coordinate may further indicate the height of the detected object or laser point relative to the autonomous vehicle 104.


In addition, the object label applied by the object detector 130 may be associated with an image frame number that identifies the image in which the detected object may be located. As a moving object may be located in a number of images, such as a moving vehicle captured by one or more of the cameras 112, the moving object may appear in different locations in different images. Hence, the moving object may have a number of different object labels associated with it, and each of the object labels may be associated with a corresponding image number to identify the location of the moving object across multiple images.
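
For illustration, an object label of the kind described above might be modeled as a small record that carries the bounding-box parameters and the image frame number. This is a minimal sketch under the assumptions stated in the text; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectLabel:
    """Hypothetical record for one object label applied to one captured image."""
    frame_number: int           # identifies the image in which the object appears
    x: float                    # latitudinal placement of the bounding box (pixels)
    y: float                    # longitudinal placement of the bounding box (pixels)
    width: float                # bounding-box width (pixels)
    height: float               # bounding-box height (pixels)
    z: Optional[float] = None   # elevation of the object/laser point (laser images only)
    name: Optional[str] = None  # e.g., "vehicle", "pedestrian"

# Example: a label for a vehicle detected in frame 12 of a camera image sequence.
label = ObjectLabel(frame_number=12, x=310.0, y=180.0, width=96.0, height=54.0, name="vehicle")
```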


The autonomous vehicle 104 may also be in communication with an object identification server 132. The object identification server 132 may be operative to verify the objects detected by the autonomous vehicle 104 using the object detector 130. Moreover, the object identification server 132 may facilitate the optimization of one or more of the parameters 124-128 used by the object detector 130 to detect objects in the captured images 118-122. In one embodiment, the autonomous vehicle 104 may communicate the object labels, and their corresponding object label parameters, to the object identification server 132 for verifying that the object labels were correctly, or substantially correctly, applied to objects appearing in one or more of the captured images 118-122. The implementation of the object identification server 132 is discussed with reference to FIG. 10.


The object identification server 132 may also be in communication with one or more client devices 134-138 via a network 142. The networks 140-142 may be implemented as any combination of networks. Moreover, the networks 140-142 may be the same network. The networks 140-142 may also be various types of networks. As examples, the networks 140-142 may be a Wide Area Network (“WAN”), such as the Internet; a Local Area Network (“LAN”); a Personal Area Network (“PAN”); or a combination of WANs, LANs, and PANs. Moreover, the networks 140-142 may involve the use of one or more wired protocols, such as the Simple Object Access Protocol (“SOAP”); wireless protocols, such as 802.11a/b/g/n, Bluetooth, or WiMAX; transport protocols, such as TCP or UDP; an Internet layer protocol, such as IP; application-level protocols, such as HTTP; a combination of any of the aforementioned protocols; or any other type of protocol.


The client devices 134-138 may be operated by a reviewer that may review one or more of the object labels applied by the object detector 130. The client devices 134-138 in communication with the object identification server 132 may be any type of client device. As examples, and without limitation, the client devices 134-138 may include one or more desktop computers and one or more mobile devices. Examples of a mobile device include a laptop, a Personal Digital Assistant (“PDA”), a tablet computer, or other such mobile device. Accordingly, a reviewer may communicate and interact with the object identification server 132 regardless of whether the client devices 134-138 are desktop computers, mobile devices (e.g., laptops, smartphones, PDAs, etc.), or any other such client device.


The one or more reviewers may also review one or more of the captured images 118-122 and may manually apply object labels to objects appearing in the one or more captured images 118-122. As discussed below with reference to FIG. 10, the object identification server 132 may compare the manually applied object labels with the object labels applied by the object detector 130 of the autonomous vehicle 104 to optimize one or more of the object detection parameters 124-128.


In addition, while the object identification server 132 is shown separately from the client devices 134-138, a reviewer may use the object identification server 132 without a client device. In other words, the object identification server 132 may be a desktop computer usable by the reviewer without an intermediary client device.



FIG. 2 illustrates one example of the autonomous vehicle 104 and the placement of the one or more sensors 106. The autonomous vehicle 104 may include lasers 210 and 211, for example, mounted on the front and top of the autonomous vehicle 104, respectively. The laser 210 may have a range of approximately 150 meters, a thirty degree vertical field of view, and approximately a thirty degree horizontal field of view. The laser 211 may have a range of approximately 50-80 meters, a thirty degree vertical field of view, and a 360 degree horizontal field of view. The lasers 210-211 may provide the autonomous vehicle 104 with range and intensity information that the processor 110 may use to identify the location and distance of various objects. In one aspect, the lasers 210-211 may measure the distance between the vehicle and the object surfaces facing the vehicle by spinning on their axes and changing their pitch.


The autonomous vehicle 104 may also include various radar detection units, such as those used for adaptive cruise control systems. The radar detection units may be located on the front and back of the car as well as on either side of the front bumper. As shown in the example of FIG. 2, the autonomous vehicle 104 includes radar detection units 220-223 located on the side (only one side being shown), front and rear of the vehicle. Each of these radar detection units 220-223 may have a range of approximately 200 meters for an approximately 18 degree field of view as well as a range of approximately 60 meters for an approximately 56 degree field of view.


In another example, a variety of cameras may be mounted on the autonomous vehicle 104. The cameras may be mounted at predetermined distances so that the parallax from the images of two or more cameras may be used to compute the distance to various objects. As shown in FIG. 2, the autonomous vehicle 104 may include two cameras 230-231 mounted under a windshield 340 near the rear view mirror (not shown).


The camera 230 may include a range of approximately 200 meters and an approximately 30 degree horizontal field of view, while the camera 231 may include a range of approximately 100 meters and an approximately 60 degree horizontal field of view.


Each sensor may be associated with a particular sensor field in which the sensor may be used to detect objects. FIG. 3A is a top-down view of the approximate sensor fields of the various sensors. FIG. 3B depicts the approximate sensor fields 310 and 311 for the lasers 210 and 211, respectively, based on the fields of view for these sensors. For example, the sensor field 310 includes an approximately 30 degree horizontal field of view for approximately 150 meters, and the sensor field 311 includes a 360 degree horizontal field of view for approximately 80 meters.



FIG. 3C depicts the approximate sensor fields 320A-323B for the radar detection units 220-223, respectively, based on the fields of view for these sensors. For example, the radar detection unit 220 includes sensor fields 320A and 320B. The sensor field 320A includes an approximately 18 degree horizontal field of view for approximately 200 meters, and the sensor field 320B includes an approximately 56 degree horizontal field of view for approximately 80 meters. Similarly, the radar detection units 221-223 include the sensor fields 321A-323A and 321B-323B. The sensor fields 321A-323A include an approximately 18 degree horizontal field of view for approximately 200 meters, and the sensor fields 321B-323B include an approximately 56 degree horizontal field of view for approximately 80 meters. The sensor fields 321A and 322A extend past the edges of FIGS. 3A and 3C.



FIG. 3D depicts the approximate sensor fields 330-331 of cameras 230-231, respectively, based on the fields of view for these sensors. For example, the sensor field 330 of the camera 230 includes a field of view of approximately 30 degrees for approximately 200 meters, and sensor field 331 of the camera 231 includes a field of view of approximately 60 degrees for approximately 100 meters.


In general, an autonomous vehicle 104 may include sonar devices, stereo cameras, a localization camera, a laser, and a radar detection unit each with different fields of view. The sonar may have a horizontal field of view of approximately 60 degrees for a maximum distance of approximately 6 meters. The stereo cameras may have an overlapping region with a horizontal field of view of approximately 50 degrees, a vertical field of view of approximately 10 degrees, and a maximum distance of approximately 30 meters. The localization camera may have a horizontal field of view of approximately 75 degrees, a vertical field of view of approximately 90 degrees and a maximum distance of approximately 10 meters. The laser may have a horizontal field of view of approximately 360 degrees, a vertical field of view of approximately 30 degrees, and a maximum distance of 100 meters. The radar may have a horizontal field of view of 60 degrees for the near beam, 30 degrees for the far beam, and a maximum distance of 200 meters. Hence, the autonomous vehicle 104 may be configured with any arrangement of sensors, and each of these sensors may capture one or more raw images for use by the object detector 130 to detect the various objects near and around the autonomous vehicle 104.



FIGS. 4-9 are examples of various images that may be captured by one or more sensors 106 mounted on the autonomous vehicle 104. FIG. 4 is a first example of a raw camera image 402 captured by one or more of the cameras 112. FIG. 5 is a first example of a laser point cloud image 502 of the view shown in the first raw camera image 402. FIG. 6 is a second example of a raw camera image 602 captured by one or more of the cameras 112. Similarly, FIG. 7 is an example of a laser point cloud image 702 of the view shown in the raw camera image 602. FIG. 8 is yet another example of a raw camera image 802, and FIG. 9 is an example of a laser point cloud image 902 of the view shown in the raw camera image 802.


As shown in the examples of FIGS. 5, 7, and 9, a laser point cloud image may substantially or approximately correspond to a raw camera image captured by a camera. Moreover, FIGS. 5, 7, and 9 demonstrate that the autonomous vehicle 104 may be configured to capture more than one type of laser point cloud image. The autonomous vehicle 104 may be similarly configured to capture other types of perspectives using other types of sensors as well (e.g., a panoramic image from a camera).


As the autonomous vehicle 104 is capturing the one or more images 118-122, the object detector 130 may be analyzing the images to determine whether there are objects present in the captured images 118-122. As mentioned previously, the object detector 130 may leverage one or more object detection parameters 124-128 in determining whether an object is present in an image. To verify or improve the accuracy of object detection by the object detector 130, the autonomous vehicle 104 may also communicate one or more of the captured images 118-122 to the object identification server 132. Communicating the captured images 118-122 to the object identification server 132 may occur at any time, such as while the autonomous vehicle 104 is capturing the one or more images 118-122, after the autonomous vehicle 104 has captured the one or more images 118-122, or at any other time.



FIG. 10 illustrates one example of the object identification server 132 according to aspects of the disclosure. The object identification server 132 may include a memory 1002 and a processor 1004. The memory 1002 may include random access memory (“RAM”), read-only memory (“ROM”), hard disks, floppy disks, CD-ROMs, flash memory or other types of computer memory. In addition, the memory 1002 may be distributed across many different types of computer-readable media.


The processor 1004 may be a microprocessor, a microcontroller, a DSP, an ASIC, discrete analog or digital circuitry, or a combination of other types of circuits or logic. In addition, the processor 1004 may be distributed across many different types of processors.


Interfaces between and within the object identification server 132 may be implemented using one or more interfaces, such as Web Services, SOAP, or Enterprise Service Bus interfaces. Other examples of interfaces include message passing, such as publish/subscribe messaging, shared memory, and remote procedure calls.


The memory 1002 may be operative to store one or more databases. For example, the memory 1002 may store a raw sensor image database 1006, an autonomous vehicle object identification database 1008, and a reviewer object identification database 1010. One or more of the databases 1006-1010 may be implemented in any combination of components. For instance, although the databases 1006-1010 are not limited to any single implementation, one or more of the databases 1006-1010 may be stored in computer registers, as relational databases, flat files, or any other type of database.


Although shown as a single block, the object identification server 132 may be implemented in a single system or partitioned across multiple systems. In addition, one or more of the components of the object identification server 132 may be implemented in a combination of software and hardware. In addition, any one of the components of the object identification server 132 may be implemented in a computer programming language, such as C#, C++, JAVA or any other computer programming language. Similarly, any one of these components may be implemented in a computer scripting language, such as JavaScript, PHP, ASP, or any other computer scripting language. Furthermore, any one of these components may be implemented using a combination of computer programming languages and computer scripting languages.


The raw sensor image database 1006 may store one or more of the images communicated by the autonomous vehicle 104 to the object identification server 132. Accordingly, the raw sensor image database 1006 may include images 1012 captured by one or more of the cameras 112, images 1014 captured by one or more of the lasers 114, and images 1016 captured by one or more of the radars 116. The images 1012-1016 may be formatted in any computer-readable format. For example, data for the images 1012-1016 may be stored as bitmaps comprised of grids of pixels that are stored in accordance with formats that are compressed or uncompressed, lossless (e.g., BMP) or lossy (e.g., JPEG), and bitmap or vector-based (e.g., SVG), as well as computer instructions for drawing graphics. Moreover, the images 1012-1016 stored in the raw sensor image database 1006 may correspond to, or be copies of, the images 118-122 stored in the memory 108 of the autonomous vehicle 104.


The autonomous vehicle object identification database 1008 may include the object label parameters determined by the object detector 130 for the objects appearing in the one or more images 1012-1016. Thus, in one embodiment, for each object label applied to each object detected by the object detector 130, the object identification server 132 may store the set of parameters that define each of the object labels. As a set of object label parameters defines an object label, the autonomous vehicle object identification database 1008 may be considered to effectively store object labels.


In addition, the autonomous vehicle object identification database 1008 may store object labels for each type of image. Thus, the autonomous vehicle object identification database 1008 may store object labels 1018 for the camera images 1012, object labels 1020 for the laser point cloud images 1014, and object labels 1022 for the radar images 1016.


The memory 1002 may also store a reviewer object identification application 1024 executable by the processor 1004. A reviewer may use the reviewer object identification application 1024 to identify objects (e.g., apply object labels to objects) appearing in the images 1012-1016 stored in the raw sensor image database 1006. A reviewer may include a human reviewer or a computerized reviewer operative to communicate with the reviewer object identification application 1024. While the reviewer object identification application 1024 is shown as residing in the memory 1002 of the object identification server 132, the object identification application 1024 may alternatively reside in the memory of a client device in communication with the object identification server 132.


To apply object labels to objects, the reviewer object identification application 1024 may display each image to the reviewer. The reviewer may then draw an object label, such as a bounding box or other shape, around an object that the autonomous vehicle 104 should recognize or detect. The reviewer may also provide identification information for the identified object, such as an object name (e.g., “vehicle,” “bicycle,” “pedestrian,” etc.). Alternatively, the object name may be selectable by the reviewer, such as being selectable as part of a drop-down menu or other graphical menu. The reviewer object identification application 1024 may then store the object label parameters that define the object label in the reviewer object identification database 1010. As discussed previously with regard to the object detector 130, the object label parameters may include a width parameter, a height parameter, an X-parameter, a Y-parameter, an image number parameter, and, where the image undergoing review is a laser point cloud image, a Z-parameter.
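
A sketch of how a drawn bounding box might be converted into stored object label parameters follows. The two-corner input convention and the helper name are illustrative assumptions, not features of the disclosure.

```python
def box_to_label_parameters(corner_a, corner_b, frame_number, name, z=None):
    """Hypothetical helper: convert two drawn corner points (x, y) into the
    object label parameters described in the text."""
    (xa, ya), (xb, yb) = corner_a, corner_b
    x, y = min(xa, xb), min(ya, yb)             # top-left placement of the box
    width, height = abs(xa - xb), abs(ya - yb)  # box dimensions in pixels
    return {
        "frame_number": frame_number,
        "x": x, "y": y,
        "width": width, "height": height,
        "z": z,                                 # only for laser point cloud images
        "name": name,
    }

# Example: a reviewer drags from (120, 80) to (200, 140) around a pedestrian in frame 7.
params = box_to_label_parameters((120, 80), (200, 140), frame_number=7, name="pedestrian")
```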


In addition, the reviewer object identification application 1024 may employ interpolation techniques to reduce the strain on the reviewer of identifying objects. For example, the reviewer may identify (e.g., by electronically drawing a bounding box around an object using a mouse or other input device) an object appearing in a first image, and the reviewer may identify the object appearing in a last image. The reviewer object identification application 1024 may then interpolate the object label parameters for the object appearing in images between the first image and the last image. Thus, in instances where an object, such as a moving vehicle traveling alongside the autonomous vehicle 104, appears in hundreds or thousands of images, the reviewer object identification application 1024 may reduce the time and effort required by the reviewer to identify the object in each image.
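
Such interpolation might be sketched as follows, assuming a simple linear interpolation of each bounding-box parameter between the reviewer-labeled first and last frames; the scheme and the function name are illustrative assumptions.

```python
def interpolate_labels(first, last, first_frame, last_frame):
    """Hypothetical linear interpolation of bounding-box parameters between a
    reviewer-labeled first frame and last frame. `first` and `last` are dicts
    with keys "x", "y", "width", "height"."""
    labels = {}
    span = last_frame - first_frame
    for frame in range(first_frame + 1, last_frame):
        t = (frame - first_frame) / span
        labels[frame] = {
            key: (1 - t) * first[key] + t * last[key]
            for key in ("x", "y", "width", "height")
        }
    return labels

# Example: the reviewer labels a passing car in frames 10 and 20; frames 11-19 are filled in.
intermediate = interpolate_labels(
    {"x": 100, "y": 50, "width": 80, "height": 40},
    {"x": 300, "y": 60, "width": 90, "height": 45},
    first_frame=10, last_frame=20,
)
```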


The reviewer object identification database 1010 may store the object label parameters determined by the reviewer object identification application 1024. The object label parameters may include object label parameters 1026 for raw camera images, object label parameters 1028 for laser point cloud images, and object label parameters 1030 for radar intensity images. The reviewer object identification database 1010 may store object label parameters for other types of images and/or sensors as well, such as object label parameters for sonar images, infrared images, or any other type of image.



FIGS. 11-16 are examples of object labels electronically applied to the images captured by the sensors of the autonomous vehicle 104. The object labels shown in FIGS. 11-16 have a rectangular shape, but any other shape is also possible. The object labels shown in FIGS. 11-16 are examples of object labels that may be applied by the object detector 130 of the autonomous vehicle 104 or may have been applied by a reviewer using the reviewer object identification application 1024.


Moreover, the object detector 130 may store the object label parameters that define an object label rather than electronically drawing the object labels on the images. In contrast, for expediency, a reviewer may draw an object label around an object using an input device in conjunction with the reviewer object identification application 1024, and the reviewer object identification application 1024 may determine the object label parameters based on the dimensions of the drawn object label and other aspects of the given image, such as the X-coordinate pixel and the Y-coordinate pixel derived from the given image's resolution.



FIG. 11 shows the raw camera image 402 of FIG. 4 with three rectangular object labels 1102-1106 applied to three different objects. FIG. 12 shows the laser point cloud image 502 of FIG. 5 with two rectangular object labels 1202-1204 applied to two different objects. FIG. 13 shows the raw camera image 602 of FIG. 6 with one object label 1302 applied to a single object. FIG. 14 shows the laser point cloud image 702 of FIG. 7 with one object label 1402 applied to a single object. FIG. 15 shows the raw camera image 802 with three rectangular object labels 1502-1506 applied to three different objects. FIG. 16 shows the laser point cloud image 902 of FIG. 9 with thirteen object labels 1602-1626 applied to thirteen different objects.


With the object label parameters 1026-1030 from the reviewers and the object label parameters 1018-1022 from the autonomous vehicle 104, the object identification server 132 may then proceed to optimizing the object detection parameters 124-128 used by the object detector 130 of the autonomous vehicle 104. To this end, the object identification server 132 may include an object identification analyzer 1032 executable by the processor 1004 for performing the optimization.


In optimizing the object detection parameters 124-128, the object identification analyzer 1032 may first determine whether an optimization operation should be performed. To make this determination, the object identification analyzer 1032 may compare the object labels applied by the autonomous vehicle 104 with the object labels applied by the one or more reviewers. Comparing the object labels of the autonomous vehicle 104 with the object labels applied by the one or more reviewers may include comparing the object label parameters 1018-1022 received from the autonomous vehicle 104 with the corresponding type of object label parameters 1026-1030 derived from the object labels applied by the one or more reviewers.


In comparing the object label parameters 1018-1022 with the object label parameters 1026-1030, the object identification analyzer 1032 may compare corresponding types of object labels. That is, the object identification analyzer 1032 may compare the object label parameters 1018 with the object label parameters 1026 for the raw camera images, the object label parameters 1020 with the object label parameters 1028 for the laser point cloud images, and the object label parameters 1022 with the object label parameters 1030 of the radar intensity images. Alternatively, the object identification analyzer 1032 may compare object label parameters of different types.


In one embodiment, the object identification analyzer 1032 may compare, for each image, the number of object labels applied by a reviewer with the number of object labels applied by the autonomous vehicle 104. In this embodiment, the object identification analyzer 1032 may determine whether the autonomous vehicle 104 detected the same number of objects as identified by a reviewer. A predetermined “missed object” threshold may be established for the object identification analyzer 1032 that defines the number of objects, as an absolute number or percentage, that the autonomous vehicle 104 is permitted to miss. In this embodiment, should the autonomous vehicle 104 fail to detect a given percentage or number of objects (e.g., the autonomous vehicle 104 did not detect 3% of the objects identified by a reviewer), the object identification analyzer 1032 may display to the reviewer the percentage of objects not detected by the autonomous vehicle 104. The object identification analyzer 1032 may then recommend optimization of the object detection parameters 124-128. The object identification analyzer 1032 may also display the results of this analysis to the reviewer. Alternatively, the object identification analyzer 1032 may automatically proceed to the optimization of the object detection parameters 124-128.
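
For illustration, the missed-object comparison might be sketched as a count check against a percentage threshold, as below. The function name is hypothetical; the 3% threshold in the example mirrors the figure used in the text.

```python
def missed_object_rate(reviewer_counts, vehicle_counts):
    """Hypothetical per-image comparison: fraction of reviewer-identified objects
    that the autonomous vehicle did not detect, aggregated over all images."""
    total = sum(reviewer_counts.values())
    missed = sum(
        max(reviewer_counts[image] - vehicle_counts.get(image, 0), 0)
        for image in reviewer_counts
    )
    return missed / total if total else 0.0

# Example: recommend optimization when more than 3% of reviewer-identified objects were missed.
rate = missed_object_rate({"img_1": 5, "img_2": 3}, {"img_1": 5, "img_2": 2})
if rate > 0.03:
    print(f"{rate:.1%} of objects missed; recommend optimizing object detection parameters")
```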


In another embodiment of comparing the autonomous vehicle object label parameters 1018-1022 with the reviewer object label parameters 1026-1030, the object identification analyzer 1032 may determine whether any of the object labels applied by the autonomous vehicle overlap with any of the object labels applied by the reviewer. In this embodiment, the object identification analyzer 1032 may determine, not only whether the autonomous vehicle 104 detected the same number of objects as a reviewer, but whether the autonomous vehicle 104 detected the same objects. This comparison may also indicate how accurately the autonomous vehicle 104 detected an object.


To determine the accuracy of the object labels applied by the autonomous vehicle 104, the object identification analyzer 1032 may determine an object label ratio defined as:


    object label ratio = intersection(object label_autonomous vehicle, object label_reviewer) / union(object label_autonomous vehicle, object label_reviewer)

where “intersection(object label_autonomous vehicle, object label_reviewer)” is the area of the intersection of the object label applied by the autonomous vehicle 104 and the object label applied by the reviewer, and “union(object label_autonomous vehicle, object label_reviewer)” is the area of the union of the object label applied by the autonomous vehicle 104 with the object label applied by the reviewer. Where there is a direct correlation (e.g., a direct overlap) between the object label applied by the autonomous vehicle 104 and the object label applied by the reviewer, the object label ratio may have a value of 0.5. With an imperfect correlation, the object label ratio may have a value less than 0.5. Each object label in each image may be associated with an object label ratio. The object identification analyzer 1032 may also assign an object label ratio to objects that the autonomous vehicle 104 did not detect or an object label ratio to objects that the autonomous vehicle 104 incorrectly detected (e.g., the autonomous vehicle 104 detected an object that a reviewer did not identify).
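
A sketch of this ratio for two axis-aligned bounding boxes follows. Because the text states that a direct overlap yields a ratio of 0.5, the denominator below is taken as the sum of the two label areas rather than the strict set union (which would yield 1.0 for a direct overlap); that reading, along with the (x, y, width, height) box representation, is an assumption.

```python
def object_label_ratio(box_av, box_reviewer):
    """Hypothetical sketch of the object label ratio for two axis-aligned
    bounding boxes given as (x, y, width, height).

    The denominator is the sum of the two label areas, which reproduces the
    value of 0.5 described for a direct overlap."""
    ax, ay, aw, ah = box_av
    bx, by, bw, bh = box_reviewer

    # Area of the intersection of the two boxes (0 if they do not overlap).
    overlap_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    overlap_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    intersection = overlap_w * overlap_h

    denominator = aw * ah + bw * bh
    return intersection / denominator if denominator else 0.0

# Example: identical labels give 0.5; partially overlapping labels give less.
print(object_label_ratio((0, 0, 10, 10), (0, 0, 10, 10)))  # 0.5
print(object_label_ratio((0, 0, 10, 10), (5, 0, 10, 10)))  # 0.25
```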


Using the object label ratios, where each object label in each image for a given set of images has an object label ratio, the object identification analyzer 1032 may determine the mean object label ratio value. Thus, the set of raw camera images 1012 may have a mean object label ratio value, the set of raw laser point cloud images 1014 may have a mean object label ratio value, and the set of raw radar intensity images 1016 may have a mean object label ratio value. Similar to the predetermined “missed object” threshold, the object identification analyzer 1032 may be configured with an object label ratio threshold (e.g., 0.35%) for each image type, and the mean object label ratio for a given set of images may be compared with the object label ratio threshold.


Since the different types of sensors may detect objects differently, each image type may be associated with a different value for the object label ratio threshold. For example, the set of raw camera images 1012 may be associated with an object label ratio threshold of 0.4%, the set of raw laser point cloud images 1014 may be associated with an object label ratio threshold of 0.35%, and the set of raw radar intensity images 1016 may be associated with an object label ratio threshold of 0.37%. Of course, the sets of raw images 1012-1016 may also all be associated with the same value for the object label ratio threshold.


In this embodiment, where the mean object label ratio for a given set of images does not meet (or exceed) the object label ratio threshold associated with that set of images, the object identification analyzer 1032 may display to the reviewer a level of inaccuracy in detecting objects by the autonomous vehicle 104. The object identification analyzer 1032 may then recommend optimization of the object detection parameters 124-128. The object identification analyzer 1032 may also display the results of this analysis to the reviewer. Alternatively, the object identification analyzer 1032 may automatically proceed to the optimization of the object detection parameters 124-128.
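
For illustration, the per-sensor-type threshold check might look like the following sketch. The function name is hypothetical, and the example thresholds, which the text writes with a percent sign, are treated here as plain ratio values; that interpretation is an assumption.

```python
def recommend_optimization(label_ratios, threshold):
    """Hypothetical check: compare the mean object label ratio for one set of
    images (one sensor type) against that sensor type's threshold."""
    mean_ratio = sum(label_ratios) / len(label_ratios)
    return mean_ratio < threshold, mean_ratio

# Example with hypothetical per-sensor-type thresholds and per-label ratios.
thresholds = {"camera": 0.40, "laser": 0.35, "radar": 0.37}
ratios_by_type = {"camera": [0.45, 0.31, 0.38], "laser": [0.22, 0.41], "radar": [0.39, 0.36]}
for sensor_type, ratios in ratios_by_type.items():
    recommend, mean_ratio = recommend_optimization(ratios, thresholds[sensor_type])
    if recommend:
        print(f"{sensor_type}: mean ratio {mean_ratio:.2f} below threshold; recommend optimization")
```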


In yet a third embodiment, the object identification analyzer 1032 may compare a computed speed of an object labeled by the autonomous vehicle 104 with a computed speed of an object labeled by a reviewer. The object identification analyzer 1032 may compute the speed of an object by determining the distance an object travels in a series of one or more images (e.g., since the object identification analyzer 1032 may be configured with or derive the rate at which the images were captured). The object identification analyzer 1032 may then determine the differences in speed between objects detected by the autonomous vehicle 104 and the corresponding objects identified by the reviewer. The object identification analyzer 1032 may then recommend optimization of the object detection parameters 124-128 based on the number and value of the determined speed differences. The object identification analyzer 1032 may also display the results of this analysis to the reviewer. Alternatively, the object identification analyzer 1032 may automatically proceed to the optimization of the object detection parameters 124-128.
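
The speed comparison might be sketched as follows, assuming that the center of an object label serves as the object's position and that the capture rate is known; both assumptions, and the use of pixel units rather than meters, are illustrative only.

```python
def object_speed(positions, capture_rate_fps):
    """Hypothetical speed estimate: distance traveled by an object's label center
    across consecutive images, divided by the elapsed time. Positions are (x, y)
    label centers in consecutive images; the result is in position units per second."""
    if len(positions) < 2:
        return 0.0
    distance = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    )
    elapsed = (len(positions) - 1) / capture_rate_fps
    return distance / elapsed

# Example: compare the speed computed from the vehicle's labels with the reviewer's labels.
av_speed = object_speed([(100, 50), (110, 50), (121, 50)], capture_rate_fps=30)
reviewer_speed = object_speed([(100, 50), (112, 50), (124, 50)], capture_rate_fps=30)
speed_difference = abs(av_speed - reviewer_speed)
```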


The object identification server 132 may use one or more optimization techniques to optimize the various object detection parameters 124-128. In one embodiment, the object identification server 132 performs the detection of objects with the instructions used by the object detector 130 using the possible combinations of values of the object detection parameters 124-128. For example, suppose that the object detection parameters 124 for raw camera images include ten parameters, and each parameter may have one of ten values. In this example, the object identification server 132 may perform the object detection analysis 10^10 times, and for each performance of the object detection analysis, the object identification server 132 may store a separate set of object labels (e.g., object label parameters). Thus, in this example, the object detection analysis may result in 10^10 different sets of object label parameters. The object identification server 132 may perform this object detection analysis for each sensor type, for each sensor, or for combinations thereof.


Having performed the object detection analysis with the possible combinations of values of the object detection parameters, the object identification server 132 may then invoke the object identification analyzer 1032 for each set of object label parameters. Thus, using the example above, the object identification analyzer 1032 may perform 10^10 analyses with one or more of the comparison embodiments previously discussed (e.g., the “missed object” analysis, the object label ratio analysis, and/or the object speed difference analysis).


The object identification server 132 may then select or display the combination of values for the object detection parameters that resulted in the most favorable outcome for each analysis. For example, for the “missed object” analysis, the object identification server 132 may display the set of values for the object detection parameters that resulted in the fewest objects being missed or incorrectly detected. As another example, for the object label ratio analysis, the object identification server 132 may display the set of values for the object detection parameters that resulted in the mean object label ratio closest to (or farthest from) the mean object label ratio threshold. Although the number of analyses may increase exponentially with each additional object detection parameter, the commercial availability of high-performance processors has made this optimization technique practical. The autonomous vehicle 104 may then be configured with the optimized values for the selected set of object detection parameters.
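One way to realize this selection step is sketched below in Python, assuming each analysis reduces a set of object label parameters to a single numeric score (e.g., the count of missed objects, or the distance of the mean object label ratio from its threshold). The score callable is a placeholder for whichever analysis is applied.

def select_best_parameters(results, score):
    """Return the parameter combination with the lowest score.

    results is an iterable of (params, object_label_parameters) pairs, such as
    the output of sweep_detection_parameters(); score maps a set of object label
    parameters to a number where lower means a more favorable outcome.
    """
    best_params, best_score = None, float("inf")
    for params, labels in results:
        s = score(labels)
        if s < best_score:
            best_params, best_score = params, s
    return best_params, best_score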



FIG. 17 illustrates one example of logic flow 1702 for optimizing one or more sets of object detection parameters 124-128 of the autonomous vehicle 104. As previously discussed, the autonomous vehicle 104 may capture one or more images 118-122 using one or more sensors 112-116. The autonomous vehicle 104 may then detect objects in the one or more captured images 118-122. In detecting these objects, the autonomous vehicle 104 may determine object label parameters for each of the detected objects in each of the images. The determined object label parameters and the captured images 118-122 may then be communicated to the object identification server 132 (Block 1704).


The object identification server 132 may then display each of the captured images to a reviewer for identifying objects. In response, the object identification server 132 may receive object labels applied to the objects identified by the reviewers (Block 1706). As with the object label parameters determined by the autonomous vehicle 104, the object labels applied by the one or more reviewers may be stored as object label parameters. In certain cases, the object identification server 132 may itself apply object labels to objects (e.g., by interpolation when a reviewer has identified an object in a first image and in a last image of a series).
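For illustration, such interpolation between a reviewer's first and last labeled images could resemble the following Python sketch. The bounding-box representation as (x, y, width, height) and the even spacing of intermediate images are assumptions; the server's actual interpolation may differ.

def interpolate_labels(first_box, last_box, num_intermediate: int):
    """Linearly interpolate bounding boxes for the images between two keyframes."""
    boxes = []
    for i in range(1, num_intermediate + 1):
        t = i / (num_intermediate + 1)
        boxes.append(tuple(a + t * (b - a) for a, b in zip(first_box, last_box)))
    return boxes

# Example: a reviewer labels an object at (100, 50, 40, 30) in the first image and
# at (160, 50, 40, 30) in the last; two intermediate images receive interpolated boxes.
print(interpolate_labels((100, 50, 40, 30), (160, 50, 40, 30), 2))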


The object identification server 132 may then determine whether to perform optimization on one or more sets of the object detection parameters 124-128 of the autonomous vehicle 104 (Block 1708). The object identification server 132 may make this determination using one or more of the comparison analyses previously discussed (e.g., the “missed object” analysis, the mean object label ratio analysis, and/or the object speed difference analysis).


Depending on the results of the one or more analyses, the object identification server 132 may recommend optimization of one or more sets of object detection parameters 124-128. Alternatively, the object identification server 132 may automatically perform the optimization. Where the object identification server 132 recommends the optimization of one or more sets of object detection parameters 124-128 and receives instructions to perform the optimization, the object identification server 132 may then perform the optimization of the one or more object detection parameters as previously discussed (Block 1710). The results of the optimization may then be displayed or incorporated into the autonomous vehicle 104 (Block 1712).
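The overall flow of Blocks 1704-1712 might be summarized by the following Python sketch. Every object and method name here is a hypothetical stand-in for the components described above, not an interface defined by this disclosure.

def optimize_detection_parameters(vehicle, server, reviewers):
    # Block 1704: receive the captured images and the vehicle's object label parameters.
    images, vehicle_labels = vehicle.capture_and_detect()

    # Block 1706: collect reviewer-applied object labels for the same images.
    reviewer_labels = server.collect_reviewer_labels(images, reviewers)

    # Block 1708: decide whether to optimize, using one or more comparison analyses.
    analyses = (server.missed_object_analysis,
                server.label_ratio_analysis,
                server.speed_difference_analysis)
    should_optimize = any(a(vehicle_labels, reviewer_labels) for a in analyses)

    # Block 1710: perform the optimization when recommended and confirmed.
    if should_optimize and server.confirm_with_operator():
        optimized = server.optimize_parameters(images, reviewer_labels)
        # Block 1712: display the results or configure the vehicle with them.
        vehicle.configure(optimized)
        return optimized
    return None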


In this manner, the object identification server 132 facilitates the optimization of the various object detection parameters used by the autonomous vehicle 104. To increase the accuracy of the objects detected by the autonomous vehicle 104, the object identification server 132 may leverage input provided by reviewers, namely, the identification of objects in the raw images captured by the autonomous vehicle 104. Since the sensors on the autonomous vehicle 104 may be of different sensor types, the object identification server 132 may apply different comparison schemes in comparing the object labels of the reviewers with the object labels of the autonomous vehicle. The results of the comparison inform the object identification server 132 whether to recommend optimizing the object detection parameters, and applying multiple comparison schemes increases the likelihood that the recommendation by the object identification server 132 is correct and valid. When performed, the optimization of the object detection parameters by the object identification server increases the likelihood that the autonomous vehicle will accurately detect an object in a captured image. Increases in the accuracy of detecting objects yield such benefits as a safer driving experience, faster response times, improved prediction of the behavior of detected objects, and other such benefits.


Although aspects of this disclosure have been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of this disclosure as defined by the appended claims. Furthermore, while certain operations and functions are shown in a specific order, they may be performed in a different order unless it is expressly stated otherwise.

Claims
  • 1. A system for optimizing one or more object detection parameters used by an object detector of an autonomous vehicle, the system comprising: a memory configured to store raw sensor images of different types; a reviewer object identification application configured to generate a plurality of object label parameters for ones of the raw sensor images, and store the plurality of object label parameters in the memory; and a processor configured to execute the reviewer object identification application, wherein the reviewer object identification application is configured to be used by a reviewer to identify objects appearing in the raw sensor images and compare object labels applied to the identified objects with object labels applied by the object detector of the autonomous vehicle in order to optimize the one or more object detection parameters.
  • 2. The system of claim 1, wherein the raw sensor images correspond to, or are copies of, images stored in a memory of the autonomous vehicle.
  • 3. The system of claim 2, wherein the memory is further configured to store a plurality of object label parameters generated by the object detector.
  • 4. The system of claim 3, further comprising an object identification analyzer executable by the processor in order to optimize the one or more object detection parameters.
  • 5. The system of claim 3, wherein the reviewer that uses the reviewer object identification application includes a human reviewer or a computerized reviewer operative to communicate with the reviewer object identification application.
  • 6. The system of claim 3, wherein the raw sensor images include images captured by one or more cameras of the autonomous vehicle.
  • 7. The system of claim 6, wherein the memory is further configured to store object labels for the images captured by the one or more cameras of the autonomous vehicle.
  • 8. The system of claim 3, wherein the raw sensor images include images captured by one or more lasers of the autonomous vehicle.
  • 9. The system of claim 8, wherein the memory is further configured to store object labels for the images captured by the one or more lasers of the autonomous vehicle.
  • 10. The system of claim 3, wherein the raw sensor images include images captured by one or more radars of the autonomous vehicle.
  • 11. The system of claim 10, wherein the memory is configured to store object labels for the images captured by the one or more radars of the autonomous vehicle.
  • 12. The system of claim 11, wherein the reviewer object identification application is further configured to display each of the raw sensor images to the reviewer so that the reviewer can draw an object label around an object that the autonomous vehicle recognizes or detects.
  • 13. The system of claim 12, wherein the object label is a bounding box.
  • 14. The system of claim 12, wherein the reviewer object identification application is further configured to provide the reviewer with a drop-down menu for selecting an object name.
  • 15. The system of claim 14, wherein the object name is one of a vehicle, a pedestrian or a bicycle.
  • 16. The system of claim 12, wherein the reviewer object identification application is further configured to store object label parameters that define the object label.
  • 17. The system of claim 16, wherein the object label parameters include one or more of a width parameter, a height parameter, an X-parameter, a Y-parameter, an image number parameter or a Z-parameter.
  • 18. The system of claim 1, wherein the raw sensor images include at least one of raw camera images, raw laser point cloud images or raw radar intensity images.
  • 19. A system for optimizing one or more object detection parameters, the system comprising: an autonomous vehicle comprising: one or more sensors configured to capture images; a first memory configured to store an object detector and the images captured by the one or more sensors, wherein the object detector is configured to use the one or more object detection parameters to detect an object in one or more of the captured images; and a first processor configured to control the one or more sensors; a second memory configured to store raw sensor images of different types that correspond to, or are copies of, the images captured by the one or more sensors; a reviewer object identification application configured to generate a plurality of object label parameters for ones of the raw sensor images, and store the plurality of object label parameters in the second memory; and a second processor configured to execute the reviewer object identification application, wherein the reviewer object identification application is configured to be used by a reviewer to identify objects appearing in the raw sensor images and compare object labels applied to the identified objects with object labels applied by the object detector in order to optimize the one or more object detection parameters.
  • 20. The system of claim 19, wherein the second memory is further configured to store a plurality of object label parameters generated by the object detector.
  • 21. The system of claim 19, wherein the reviewer that uses the reviewer object identification application includes a human reviewer or a computerized reviewer operative to communicate with the reviewer object identification application.
  • 22. The system of claim 19, wherein the raw sensor images include at least one of raw camera images, raw laser point cloud images or raw radar intensity images.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 16/692,643, filed Nov. 22, 2019, which is a continuation of U.S. patent application Ser. No. 16/209,429, filed Feb. 27, 2019, now issued as U.S. Pat. No. 10,572,717, which is a continuation of U.S. patent application Ser. No. 15/874,130, filed Jan. 18, 2018, now issued as U.S. Pat. No. 10,198,619, which is a divisional of U.S. patent application Ser. No. 15/587,680, filed on May 5, 2017, now issued as U.S. Pat. No. 9,911,030, which is a continuation of U.S. patent application Ser. No. 14/792,995, filed Jul. 7, 2015, now issued as U.S. Pat. No. 9,679,191, which is a continuation of U.S. patent application Ser. No. 13/200,958, filed Oct. 5, 2011, now issued as U.S. Pat. No. 9,122,948, which claims the benefit of the filing date of U.S. Provisional Application No. 61/390,094 filed Oct. 5, 2010, and U.S. Provisional Application No. 61/391,271 filed Oct. 8, 2010, the disclosures of which are hereby incorporated herein by reference.
