This relates generally to camera-based object detection, and more particularly, to detecting objects using one or more cameras on a vehicle to prevent the opening of a door into the objects.
Modern vehicles, especially automobiles, increasingly include various sensors for detecting and gathering information about the vehicles' surroundings. These sensors may include ultrasonic sensors for detecting the proximity of a vehicle to objects in the vehicle's surroundings. However, ultrasonic sensors may have limited accuracy in certain situations, such as when the objects are relatively close to the sensors. Therefore, an alternative solution to object detection can be desirable.
Examples of the disclosure are directed to using one or more cameras to detect objects in proximity to a vehicle. In some examples, the vehicle determines whether those objects will interfere with the opening of one or more doors on the vehicle, and if they will, prevents the opening of the doors into those objects. In some examples, the cameras can be rear-facing or side-facing, and in some examples, the cameras can be mounted to the doors of the vehicle.
In the following description of examples, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the disclosed examples.
Some vehicles, such as automobiles, may include ultrasonic sensors for detecting the proximity of the vehicles to objects in the vehicles' surroundings. However, ultrasonic sensors may have limited accuracy in certain situations, such as when the objects are relatively close to the sensors. Examples of the disclosure are directed to using one or more cameras to detect objects in proximity to a vehicle. In some examples, the vehicle determines whether those objects will interfere with the opening of one or more doors on the vehicle, and if they will, prevents the opening of the doors into those objects. In some examples, the cameras can be rear-facing or side-facing, and in some examples, the cameras can be mounted to the doors of the vehicle. Further, in some examples, the camera-based object detection of the disclosure can be used in conjunction with other sensors on the vehicle (e.g., ultrasonic sensors, radar sensors, etc.) to improve the accuracy of object detection. For example, object detection information from multiple systems can be processed and/or combined (e.g., averaged) by the vehicle to determine object location(s). In some examples, the vehicle can switch between different types of object-detection systems (e.g., camera, ultrasonic, radar) based on which type of system will likely provide the most accurate object detection results (e.g., due to weather conditions, vehicle speed, etc.).
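The fusion and switching described above can be sketched as follows. This is an illustrative sketch only; the function names, sensor labels, averaging scheme, and thresholds are hypothetical assumptions, and the disclosure does not prescribe any particular fusion implementation:

```python
# Hypothetical sketch: combining object-distance estimates from multiple
# detection systems by averaging, and selecting a system based on
# conditions. All names and thresholds are illustrative assumptions.

def fuse_estimates(estimates):
    """Combine per-sensor distance estimates (meters) by simple averaging.

    Sensors that produced no estimate are represented by None and skipped.
    """
    valid = [d for d in estimates.values() if d is not None]
    return sum(valid) / len(valid) if valid else None

def select_system(weather, speed_mph):
    """Pick the system likely to give the most accurate detection."""
    if weather == "fog":
        return "radar"        # camera accuracy degrades in fog
    if speed_mph < 5:
        return "ultrasonic"   # short-range sensing at low speed
    return "camera"

fused = fuse_estimates({"camera": 1.2, "ultrasonic": 1.0, "radar": None})
```

In practice the combination could be a weighted average or a probabilistic filter rather than the plain mean shown here.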
It can be beneficial for one or more systems on vehicle 102 to determine whether an object will interfere with the opening of doors 104 and/or 106 (e.g., a full opening of doors 104 and/or 106). For example, if doors 104 and/or 106 are automatic doors (e.g., if the doors open, using motors, in response to a user command (e.g., pressing of a button or being within a certain proximity of the door or a door handle) to open the doors), and if an object will interfere with the opening of doors 104 and/or 106, vehicle 102 can determine to provide a warning to an operator of the vehicle about the interfering object, prevent the opening of doors 104 and/or 106, and/or only open doors 104 and/or 106 to the extent permitted by the interfering object (e.g., only open door 106 halfway). If doors 104 and/or 106 are manual doors (e.g., if the doors are opened by hand), vehicle 102 can simply allow doors 104 and/or 106 to open freely (i.e., the object-avoidance system can be fully disabled), determine to provide a warning to an operator of the vehicle about the interfering object (i.e., the object-avoidance system can be partially disabled), prevent the opening of doors 104 and/or 106 (e.g., by locking the doors or the door opening mechanism, or providing resistance against opening the doors), and/or only allow doors 104 and/or 106 to open to the extent permitted by the interfering object.
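The decision logic above can be sketched as follows. The door-type labels, action names, and the 70-degree full-swing angle are hypothetical assumptions for illustration, not the disclosure's implementation:

```python
# Hypothetical sketch of the door-opening decision described above:
# given the largest opening angle free of the interfering object,
# choose an action depending on whether the door is automatic or manual.

def door_open_action(door_type, max_clear_angle_deg, full_angle_deg=70.0):
    """Return (action, angle) for a door whose swing may be limited.

    max_clear_angle_deg: largest opening angle not occupied by the object.
    full_angle_deg: the door's full swing (assumed 70 degrees here).
    """
    if max_clear_angle_deg >= full_angle_deg:
        return ("open_fully", full_angle_deg)
    if door_type == "automatic":
        # Warn the operator and open only as far as the object permits.
        return ("warn_and_open_partially", max_clear_angle_deg)
    # Manual door: warn and resist (or lock) the opening mechanism.
    return ("warn_and_resist", max_clear_angle_deg)
```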
In some examples, vehicle 102 can determine whether an object will interfere with the opening of one or more of doors 104 and 106 by determining whether the object is within interaction spaces 105 and 107, respectively. For example, referring to
The vehicle of the disclosure can determine whether one or more objects will interfere with the opening of one or more doors by using one or more optical cameras, as will be discussed below. In particular, the examples of the disclosure need not be reliant on the availability of image data from two or more cameras, but rather can be implemented using image data from a single camera.
Camera 220 can be mounted on the front half of vehicle 202 (e.g., in front of door 204), and can be rear-facing. In some examples, camera 220 can be mounted on or near the right-hand A-Pillar of vehicle 202. Camera 220 can have field of view 221, which can encompass at least a portion of the interaction spaces of doors 204 and 206 (not shown). In some examples, field of view 221 can also encompass a portion of vehicle 202 (e.g., doors 204 and 206, hinges 208 and 210, door handles (not shown), wheel wells (not shown), any portion of the vehicle chassis, etc.). In some examples, camera 220 can replace a traditional side view mirror on vehicle 202, by displaying an image of what is included in field of view 221 on a display (e.g., LCD, OLED, etc.) in vehicle 202. In some examples, camera 220 can be used to perform blind-spot monitoring on the right side of vehicle 202.
Camera 220 can also be utilized to determine whether an object (e.g., object 212) will interfere with the opening of one or more of doors 204 and 206 (e.g., by determining whether the object is within the interaction spaces of doors 204 and/or 206, as previously described). In particular, vehicle 202 can, based on the movement of the vehicle (and thus camera 220) with respect to object 212, determine a relative location of the object with respect to the vehicle, as will be described in more detail below. Because vehicle 202 can know the interaction spaces of doors 204 and 206 before the above determination is made, the vehicle can compare the determined relative location of object 212 to the interaction spaces of doors 204 and 206 to determine whether the object is within one or more of those interaction spaces. In some examples, the interaction spaces of doors 204 and 206 can be stored in a memory in vehicle 202 prior to the above-described determinations being made (e.g., when vehicle 202 is manufactured, or during a calibration procedure performed after manufacture of the vehicle).
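One simple way to store and query an interaction space, consistent with the comparison described above, is to model it as the circular sector swept by the door about its hinge. The geometry model and all values below are illustrative assumptions; the disclosure does not prescribe a particular representation:

```python
import math

# Hypothetical geometric sketch: an interaction space modeled as the
# sector swept by a door about its hinge. An object's determined
# location (in the vehicle frame) is tested against that sector.

def in_interaction_space(obj_xy, hinge_xy, door_len_m, max_angle_deg):
    """True if obj_xy (meters, vehicle frame) lies inside the swept sector.

    Angle 0 corresponds to the closed door lying along the +x axis.
    """
    dx = obj_xy[0] - hinge_xy[0]
    dy = obj_xy[1] - hinge_xy[1]
    if math.hypot(dx, dy) > door_len_m:
        return False  # beyond the door's reach
    angle = math.degrees(math.atan2(dy, dx))
    return 0.0 <= angle <= max_angle_deg
```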
Vehicle 202 can use any suitable method(s) for determining the relative location of object 212 using camera 220, such as “depth from motion” (or “structure from motion”) algorithms. In depth from motion algorithms, the relative locations of one or more objects in the field of view of a single camera can be determined by comparing the positions of the objects, in images captured by the camera, at multiple instances of time between which the camera has moved with respect to the objects. Exemplary depth from motion algorithms are described in U.S. Pat. No. 8,837,811, entitled “Multi-stage linear structure from motion,” the contents of which are hereby incorporated by reference for all purposes. Because camera 220 can be mounted to vehicle 202, as previously described, a depth from motion map of the vehicle's surroundings visible to camera 220 can be determined while the vehicle is moving. For example, if vehicle 202 is moving with respect to object 212, a depth from motion map can be generated that indicates the position of object 212, from which the vehicle can determine whether object 212 will interfere with the opening of one or both of doors 204 and 206. Analogously to what is described above, vehicle 202 can utilize camera 218 to determine whether an object will interfere with the opening of one or both of doors 214 and 216.
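In the simplest case, where the camera translates by a known distance perpendicular to its optical axis between two frames, the depth of a tracked feature follows from its pixel disparity in the same way as in a stereo pair. The sketch below illustrates only that single-feature special case; real depth from motion pipelines track many features and estimate structure and camera motion jointly:

```python
# Simplified depth-from-motion sketch (illustrative assumption): a camera
# with focal length f (pixels) translates by a known baseline b (meters)
# perpendicular to its optical axis; a feature's depth is Z = f * b / d,
# where d is the feature's pixel disparity between the two frames.

def depth_from_motion(focal_px, baseline_m, x1_px, x2_px):
    """Depth (meters) of a feature observed at x1_px, then at x2_px
    after the camera has moved baseline_m perpendicular to its axis."""
    disparity = abs(x2_px - x1_px)
    if disparity == 0:
        return float("inf")  # no parallax: effectively infinite depth
    return focal_px * baseline_m / disparity
```

For example, with a 1000-pixel focal length, a 0.5 m camera displacement, and a 100-pixel disparity, the feature's depth is 5 m.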
In some examples, cameras 218 and 220 can be mounted on vehicle 202 at known locations on the vehicle (e.g., known positions, heights, orientations, etc.). Further, the speed of vehicle 202 can be known to the vehicle. Thus, when making the determinations about the location(s) of object(s) in vehicle's 202 surroundings, one or more of these known quantities can be used by the vehicle to increase the accuracy of the depth from motion maps as compared with situations where camera speed and/or location may not be known. In other words, one or more of the known location of the cameras and the known speed of the vehicle, for example, can be used to determine the location of objects relative to vehicle 202. Additionally, as discussed above, in some examples, a portion of vehicle 202 can be included in the fields of view of cameras 218 and/or 220. Thus, vehicle 202 can identify known reference points captured in the cameras' fields of view, and can use known dimensions corresponding to those reference points to more accurately determine the depth from motion maps (and thus the locations of objects relative to the vehicle). For example, hinge 210 and the rear-right wheel (not shown) of vehicle 202 can be in field of view 221 of camera 220. Further, vehicle 202 can know the distance between hinge 210 and the rear-right wheel (e.g., the distance can be stored in a memory). Using this reference distance, vehicle 202 can more accurately determine the location of object 212 with respect to the vehicle (and, for example, whether the object is within the interaction spaces of doors 204 and 206). Further, in some examples, vehicle 202 can utilize the known reference dimensions of various portions of the vehicle in field of view 221 as reference points for performing auto-calibration of the depth from motion determinations described in this disclosure.
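The use of a known reference distance, such as the hinge-to-wheel distance described above, can be sketched as a scale calibration: a depth from motion reconstruction is defined only up to an unknown scale, and one known metric distance between two visible vehicle points fixes that scale. Function names and values below are illustrative assumptions:

```python
# Hypothetical sketch of scale calibration using a known on-vehicle
# reference distance (e.g., door hinge to rear wheel) visible in the
# camera's field of view.

def calibrate_scale(est_ref_dist, true_ref_dist_m):
    """Scale factor mapping reconstruction units to meters."""
    return true_ref_dist_m / est_ref_dist

def to_metric(depth_map, scale):
    """Apply the scale factor to an unscaled depth map (list of rows)."""
    return [[d * scale for d in row] for row in depth_map]

# Reconstruction measured the reference as 2.5 units; it is known
# (from memory) to be 1.8 m, so every depth is rescaled accordingly.
scale = calibrate_scale(est_ref_dist=2.5, true_ref_dist_m=1.8)
metric_map = to_metric([[2.5, 5.0]], scale)  # approximately [[1.8, 3.6]]
```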
In some examples, the depth from motion maps discussed above can be generated whenever vehicle 202 is in motion; when an input for opening a door is received (e.g., after the vehicle comes to a stop), the last-generated depth from motion map can be utilized to determine whether an object will prevent opening of any of doors 204, 206, 214 and 216. In other examples, the depth from motion maps can be determined only after vehicle 202 determines that it will likely come to a stop within a threshold amount of time (e.g., 1 second, 5 seconds, 10 seconds, etc.), which can be based on an assumption that the doors of the vehicle are unlikely to be opened while the vehicle is in motion. In this way, vehicle 202 may avoid generating depth from motion maps when traveling at relatively constant speeds on a freeway or a road, for example. In such examples, once vehicle 202 determines that it will likely come to a stop, generation of the depth from motion maps can be triggered such that the maps can be ready when the vehicle is stopped and a door is being opened, or is requested to be opened. In some examples, vehicle 202 can determine that it will likely come to a stop within a threshold amount of time if the vehicle slows down below a threshold speed (e.g., 5 mph, 10 mph, etc.) or is decelerating above a threshold deceleration rate from an initial detected speed of travel. In some examples, the depth from motion maps may be generated only after vehicle 202 actually does stop, at which time the vehicle can process past image data from cameras 218 and/or 220 (e.g., last 5 seconds, last 10 seconds, etc., worth of images, stored in a memory) to determine the depth from motion maps.
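The stop-prediction trigger described above can be sketched as follows; the threshold values and function names are illustrative assumptions, not values prescribed by the disclosure:

```python
# Hypothetical trigger logic: generate depth from motion maps only when
# the vehicle is stopped, slow, or braking hard (thresholds illustrative).

SPEED_THRESHOLD_MPH = 5.0
DECEL_THRESHOLD_MPH_S = 3.0  # deceleration-rate threshold

def should_generate_maps(speed_mph, accel_mph_s):
    """True when the vehicle has stopped or will likely stop soon."""
    if speed_mph <= 0.0:
        return True   # already stopped: process recently buffered images
    if speed_mph < SPEED_THRESHOLD_MPH:
        return True   # slow enough that a stop is likely imminent
    # accel_mph_s is negative while decelerating; compare its magnitude.
    return -accel_mph_s > DECEL_THRESHOLD_MPH_S
```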
Camera 318 can be mounted on the right side of vehicle 302, such as on one of doors 304 and 306, or another structure of the vehicle, and can be side-facing. Camera 318 can have field of view 319, which can encompass at least a portion of the interaction spaces of doors 304 and 306 (not shown). In some examples, camera 318 can be used to identify a user (e.g., via biometric facial recognition) and provide access to vehicle 302 upon successful identification of the user and determination that the user is authorized to access the vehicle.
Camera 318 can also be utilized to determine whether an object (e.g., object 312) will interfere with the opening of one or more of doors 304 and 306 (e.g., by determining whether the object is within the interaction spaces of doors 304 and/or 306, as previously described). In particular, vehicle 302 can, based on the movement of the vehicle (and thus camera 318) with respect to object 312, determine a relative location of the object with respect to the vehicle using depth from motion techniques, as described above. Because vehicle 302 can know the interaction spaces of doors 304 and 306 before the above determination is made, the vehicle can compare the determined relative location of object 312 to the interaction spaces of doors 304 and 306 to determine whether the object is within one or more of those interaction spaces. In some examples, the interaction spaces of doors 304 and 306 can be stored in a memory in vehicle 302 prior to the above-described determinations being made (e.g., when vehicle 302 is manufactured, or during a calibration procedure performed after manufacture of the vehicle). Other details about the operation of camera 318 in determining whether objects will interfere with the opening of doors 304 and/or 306, such as how or when such determinations are made, can be analogous to those discussed above with respect to
The depth from motion determinations of the examples of
In some examples, cameras 418 and 420 can be utilized in a manner similar to that described with reference to
In some examples, the vehicle control system 500 can be connected to (e.g., via controller 520) one or more actuator systems 530 in the vehicle. The one or more actuator systems 530 can include, but are not limited to, a motor 531 or engine 532, battery system 533, transmission gearing 534, suspension setup 535, brakes 536, steering system 537 and door system 538. Based on the determined locations of one or more objects relative to the interaction spaces of doors 538, the vehicle control system 500 can control one or more of these actuator systems 530 to prevent the opening of a door into one of the objects. This can be done by, for example, controlling operation of doors 538 as discussed in this disclosure. As another example, the vehicle control system 500 can move the vehicle, such that the door to be opened will be free to open, by adjusting the steering angle and engaging the drivetrain (e.g., motor) to move the vehicle at a controlled speed. The camera system 506 can continue to capture images and send them to the vehicle control system 500 for analysis, as detailed in the examples above. The vehicle control system 500 can, in turn, continuously or periodically send commands to the one or more actuator systems 530 to prevent the opening of a door into one of the objects.
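The periodic command flow described above can be sketched as a simple control step. The component interfaces, names, and the 70-degree full swing below are illustrative assumptions, not the disclosure's actual architecture:

```python
# Hypothetical control-loop sketch: the control system periodically
# re-evaluates the clear portion of a door's swing (from the latest
# camera-based detection) and commands the door actuator accordingly.

class DoorController:
    def __init__(self, full_angle_deg=70.0):
        self.full_angle_deg = full_angle_deg

    def command_for(self, max_clear_angle_deg):
        """Limit the actuator to the clear portion of the door's swing."""
        return min(max_clear_angle_deg, self.full_angle_deg)

def control_step(controller, detect_clear_angle, door_open_requested):
    """One periodic iteration: re-detect clearance, update the actuator.

    detect_clear_angle: callable returning the current clear swing angle
    (degrees) from the latest analyzed camera images.
    """
    if not door_open_requested:
        return None  # no command issued this cycle
    return controller.command_for(detect_clear_angle())

ctrl = DoorController()
angle = control_step(ctrl, lambda: 35.0, door_open_requested=True)  # -> 35.0
```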
Thus, the examples of the disclosure provide various ways to prevent the opening of a vehicle door into an object using one or more cameras.
Therefore, according to the above, some examples of the disclosure are directed to a system comprising: one or more processors; and a memory including instructions, which when executed by the one or more processors, cause the one or more processors to perform a method comprising: receiving image data from a first camera mounted on a vehicle, the vehicle including an element configured to open into a space external to the vehicle, and the image data indicative of a motion of the first camera with respect to an environment of the vehicle; and determining whether an object, external to the vehicle, will interfere with the opening of the element into the space, the determination based on the motion, indicated in the image data, of the first camera with respect to the environment of the vehicle. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the element comprises a door configured to open into an interaction space external to the vehicle. Additionally or alternatively to one or more of the examples disclosed above, in some examples, determining whether the object will interfere with the opening of the element into the space comprises: generating a depth from motion map of the environment of the vehicle using the image data, the depth from motion map including the object; in accordance with a determination that the object, in the depth from motion map, is within the space into which the element is configured to open, determining that the object will interfere with the opening of the element into the space; and in accordance with a determination that the object, in the depth from motion map, is not within the space into which the element is configured to open, determining that the object will not interfere with the opening of the element into the space. 
Additionally or alternatively to one or more of the examples disclosed above, in some examples, generating the depth from motion map comprises: in accordance with a determination that the vehicle will likely stop within a time threshold, generating the depth from motion map; and in accordance with a determination that the vehicle will not likely stop within the time threshold, forgoing generating the depth from motion map. Additionally or alternatively to one or more of the examples disclosed above, in some examples, generating the depth from motion map comprises: in accordance with a determination that the vehicle is stopped, generating the depth from motion map using the image data from before the vehicle stopped; and in accordance with a determination that the vehicle is not stopped, forgoing generating the depth from motion map. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first camera is substantially rear-facing on the vehicle. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first camera is substantially side-facing on the vehicle. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first camera is mounted to the element, and the motion of the first camera with respect to the environment is due to the opening of the element, independent of a movement of the vehicle. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the motion of the first camera with respect to the environment is due to a movement of the vehicle. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first camera is additionally used as a side-view mirror replacement. 
Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first camera is additionally used for facial recognition to grant a user access to the vehicle. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the method further comprises: in accordance with a determination that the object will interfere with the opening of the element into the space, preventing the opening of the element; and in accordance with a determination that the object will not interfere with the opening of the element into the space, allowing the opening of the element. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the method further comprises: detecting user input for opening the element; in response to detecting the user input: in accordance with a determination that the object will interfere with the opening of the element into the space, partially opening the element into the space to the extent allowed by the object; and in accordance with a determination that the object will not interfere with the opening of the element into the space, fully opening the element into the space. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the object is in the environment of the vehicle.
Some examples of the disclosure are directed to a non-transitory computer-readable medium including instructions, which when executed by one or more processors, cause the one or more processors to perform a method comprising: receiving image data from a first camera mounted on a vehicle, the vehicle including an element configured to open into a space external to the vehicle, and the image data indicative of a motion of the first camera with respect to an environment of the vehicle; and determining whether an object, external to the vehicle, will interfere with the opening of the element into the space, the determination based on the motion, indicated in the image data, of the first camera with respect to the environment of the vehicle.
Some examples of the disclosure are directed to a vehicle comprising: a first camera; one or more processors coupled to the first camera; a door actuator coupled to the one or more processors; an element configured to open, using the door actuator, into a space external to the vehicle; and a memory including instructions, which when executed by the one or more processors, cause the one or more processors to perform a method comprising: receiving image data from the first camera, the image data indicative of a motion of the first camera with respect to an environment of the vehicle; and determining whether an object, external to the vehicle, will interfere with the opening of the element into the space, the determination based on the motion, indicated in the image data, of the first camera with respect to the environment of the vehicle.
Although examples of this disclosure have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of examples of this disclosure as defined by the appended claims.
This application claims the benefit of U.S. Provisional Patent Application No. 62/272,583, filed on Dec. 29, 2015, the entire disclosure of which is incorporated herein by reference in its entirety for all intended purposes.