The present disclosure provides systems and methods for providing assistance to users entering or exiting a vehicle using one or more of the vehicle's doors.
Vehicles may include one or more doors through which a user may enter or exit.
A method for assisting in the opening of a vehicle door, according to an example of this disclosure, includes obtaining a plurality of images of an object at the periphery of the vehicle using a camera on the vehicle, determining a distance between the object and a door of the vehicle based on the plurality of images, and initiating a vehicle response based on the distance.
In a further example of the foregoing, the distance is a distance between the object and an edge of the door farthest from a hinge of the door.
In a further example of any of the foregoing, the camera is disposed on or adjacent a side view mirror of the vehicle.
In a further example of any of the foregoing, the camera includes a fish eye lens.
In a further example of any of the foregoing, the determining step includes utilizing structure from motion to estimate the three-dimensional structure of the object based on the plurality of images.
In a further example of any of the foregoing, the plurality of images are obtained as the vehicle pulls into a parking spot.
In a further example of any of the foregoing, the determining step is performed at a controller on the vehicle.
In a further example of any of the foregoing, the vehicle response is an audiovisual response that indicates whether a collision between the door and the object is imminent.
In a further example of any of the foregoing, the vehicle response is an audible indication of the distance.
In a further example of any of the foregoing, the camera is located on one of the driver's side and passenger's side of the vehicle. The door is located on the one of the driver's side and passenger's side of the vehicle.
In a further example of any of the foregoing, the door is a front door.
In a further example of any of the foregoing, the door is a rear door.
In a further example of any of the foregoing, the determining step includes utilizing semantic segmentation based on the plurality of images, and the plurality of images are obtained as the vehicle pulls into a parking spot.
A system for assisting in the opening of a vehicle door according to an example of this disclosure includes a camera disposed on the vehicle for obtaining a plurality of images of an object at the periphery of the vehicle. The example system includes a controller which is configured to determine a distance between the object and a door of the vehicle based on the plurality of images and to initiate a vehicle response based on the distance.
In a further example of the foregoing, the vehicle response is an audiovisual response that indicates whether a collision between the door and the object is imminent.
In a further example of any of the foregoing, the vehicle response is an audible indication of the distance.
In a further example of any of the foregoing, the camera is disposed on or adjacent a side view mirror of the vehicle.
In a further example of any of the foregoing, the camera includes a fish eye lens.
In a further example of any of the foregoing, the controller is configured to utilize a structure from motion algorithm to estimate the three-dimensional structure of the object based on the plurality of images.
In a further example of any of the foregoing, the controller is configured to utilize a semantic segmentation algorithm to determine the distance.
These and other features may be best understood from the following specification and drawings, the following of which is a brief description.
This disclosure pertains to systems and methods for assisting the operation of one or more doors in a vehicle.
In some examples, the sensor 16 may include one or more of a camera, radar sensor, laser, LIDAR sensor and ultrasonic sensor. In some examples, the camera is a surround-view camera on the vehicle 14. Although one sensor 16 is shown in the schematic example, more than one sensor 16 may be utilized in some examples.
In some examples, the controller 20 may be an electronic control unit (ECU) that may include one or more individual electronic control units that control one or more electronic systems or subsystems within the vehicle 14. The controller 20, in some examples, may include one or more computing devices, each having one or more of a computer processor, memory, storage means, network device and input and/or output devices and/or interfaces. The controller 20 may be programmed to implement one or more of the methods or processes described herein.
In some examples, the object 18 may include one or more of a vehicle, a wall, a barrier, a pillar, a construction object, and a garage object, such as a shelving unit or snowblower. In some examples, the object 18 is stationary.
In some examples, as shown, the object 118 is a parked vehicle or other object next to a vehicle parking spot 122. In some examples, as shown, the camera 116 is disposed on or adjacent a side view mirror 124 of the vehicle. Although the illustrative example shows one camera 116 at the driver side view mirror, a second similar camera may be disposed on or adjacent a passenger side view mirror in some examples. In some examples, a driver side camera 116 provides information related to the driver side door or doors 112, and a passenger side camera provides information related to the passenger side door or doors 112. In some examples, the camera 116 includes a fish eye lens, which may provide an ultra wide-angle view of the side of the vehicle 114. In some examples, the camera 116 is located under the side view mirror 124. In some examples, the camera 116 may be used in combination with one or more other sensors disclosed herein.
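By way of a non-limiting illustration, where the camera 116 includes a fish eye lens, the raw frames may be rectified before any distance estimation. The following is a minimal sketch of one way to do this with OpenCV's fisheye model; the calibration values are hypothetical placeholders, and real intrinsics would come from a per-camera calibration procedure rather than from this disclosure.

```python
import cv2
import numpy as np

# Hypothetical intrinsics for a side-view fisheye camera; real values
# would come from a per-camera calibration, not from this disclosure.
K = np.array([[285.0,   0.0, 640.0],
              [  0.0, 285.0, 400.0],
              [  0.0,   0.0,   1.0]])      # camera matrix
D = np.array([0.05, -0.01, 0.002, 0.0])    # fisheye distortion (k1..k4)

def undistort_frame(frame):
    """Rectify one ultra wide-angle frame before distance estimation."""
    h, w = frame.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```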
In some examples, as shown, the distance D is the minimum distance between the object 118 and the edge 126 of the door 112 spaced farthest away from the hinge 128 of the door 112. In some examples, the edge 126 would be the first portion of the door 112 to strike the object 118. In some examples, the controller 120 is programmed with information about the specifications of the doors 112, including dimensions and swing angles.
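By way of a non-limiting illustration, one way to compute the distance D from the door specifications noted above is to model the edge 126 as sweeping a circular arc about the hinge 128 in a top-down view. The sketch below assumes the object 118 has already been reduced to two-dimensional points in a hinge-centered frame; the dimensions and function names are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

DOOR_WIDTH_M = 1.1                 # hinge-to-edge length; illustrative
MAX_SWING_RAD = np.radians(70.0)   # mechanical swing limit; illustrative

def edge_position(angle_rad):
    """Top-down (x, y) of the door edge 126 at a given opening angle,
    with the hinge 128 at the origin and the closed door along +x."""
    return DOOR_WIDTH_M * np.array([np.cos(angle_rad), np.sin(angle_rad)])

def distance_d(angle_rad, object_pts):
    """Distance D: minimum distance between the object 118 (as 2-D
    points in the hinge frame) and the edge 126 at this angle."""
    deltas = object_pts - edge_position(angle_rad)
    return float(np.min(np.linalg.norm(deltas, axis=1)))

def max_safe_angle(object_pts, clearance_m=0.05, steps=200):
    """Largest opening angle keeping at least `clearance_m` between the
    edge and the object, found by a coarse sweep of the swing arc."""
    safe = 0.0
    for angle in np.linspace(0.0, MAX_SWING_RAD, steps):
        if distance_d(angle, object_pts) <= clearance_m:
            break
        safe = angle
    return safe
```

The same sweep also yields the collision-free opening angle that some examples indicate to the occupant, as discussed below.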
In some examples, the vehicle response is one or more of an audio, visual, or audiovisual response to indicate whether a collision between the door 112 and the object 118 is imminent. In some examples, the vehicle response is an audible and/or visual indication of the distance D. In some examples, the vehicle response is an audible and/or visual indication of an angle at which the door 112 may be opened without a collision. In some examples, the response is initiated before the door 112 is opened by an occupant, which may be a driver or passenger in some examples. In some examples, the response is initiated after the door 112 is opened by an occupant. In some examples, the response is initiated when the vehicle 114 is placed in “park” and/or when the vehicle is otherwise stationary.
In some examples, the camera 116 and controller 120 may continue to determine the distance D as the door 112 is being opened. In some examples, the vehicle response may be initiated when the distance D is below a threshold amount. In some examples, the vehicle response may change as the distance D decreases.
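By way of a non-limiting illustration, the threshold behavior described above might be realized as a simple tiered mapping from the continuously updated distance D to a response; the thresholds and messages in the sketch below are hypothetical, not values from this disclosure.

```python
def vehicle_response(distance_m):
    """Map the current distance D to an escalating response.
    Thresholds and messages are illustrative placeholders."""
    if distance_m > 0.50:
        return None                                    # no alert needed
    if distance_m > 0.20:
        return ("chime", f"Object {distance_m:.2f} m from door")
    return ("continuous_alarm", "Collision imminent")
```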
In some examples, the controller 120 may be programmed to utilize a semantic segmentation algorithm to detect the object 118 and/or the drivable surface on each side of the vehicle 114. Semantic segmentation utilizes neural network based detection on camera image frames to recognize various classifications in the vehicle environment, such as the driving surface, cars, pedestrians, curbs and sidewalks, at the pixel level. In some examples, semantic segmentation associates every pixel of an image with an object class, which may include a specific type of object 118 or the surface between the object 118 and the vehicle 114.
In the drivable surface example, the amount of drivable surface may then be utilized to inform the distance between the vehicle 114 and the object 118.
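By way of a non-limiting illustration of how the amount of drivable surface could inform the distance, assume the segmentation mask has been projected into a top-down (bird's-eye) view with a known meters-per-pixel scale; the class id, scale, and function names below are assumptions for illustration only.

```python
import numpy as np

DRIVABLE = 1          # class id for the drivable surface; illustrative
M_PER_PIXEL = 0.01    # top-down scale in meters per pixel; illustrative

def lateral_clearance(bev_mask, vehicle_edge_col):
    """Estimate the clearance between the vehicle side and the first
    non-drivable pixel in a top-down semantic segmentation mask.

    bev_mask: (H, W) array of per-pixel class ids, with the vehicle
    side at column `vehicle_edge_col` and the scene extending right.
    """
    strip = bev_mask[:, vehicle_edge_col:]     # region beside the door
    drivable = strip == DRIVABLE
    # Per row: index of the first non-drivable pixel (argmin of a
    # boolean array finds the first False); full width if all drivable.
    first_obstacle = np.where(drivable.all(axis=1),
                              strip.shape[1],
                              np.argmin(drivable, axis=1))
    return float(first_obstacle.min()) * M_PER_PIXEL   # worst-case row
```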
In some examples, semantic segmentation utilizes a forward-facing camera, a reverse-facing camera, and one or more side cameras. The sensing may be done one frame at a time. In some examples, semantic segmentation may be performed while the vehicle 114 is moving or stationary.
As illustrated schematically in the drawings, an example method includes obtaining a plurality of images of an object at the periphery of the vehicle using a camera on the vehicle, determining a distance between the object and a door of the vehicle based on the plurality of images (determining step 204), and initiating a vehicle response based on the distance.
In some examples, the determining step 204 includes utilizing structure from motion to estimate the three-dimensional structure of the object based on the plurality of images. In some examples, the plurality of images are obtained as the vehicle pulls into a parking spot.
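By way of a non-limiting illustration, a minimal two-view structure-from-motion sketch for the determining step 204 is shown below using OpenCV. Monocular structure from motion recovers translation only up to scale, so a known camera motion between frames (for example, from wheel odometry as the vehicle pulls into the parking spot) is assumed here to fix metric scale; all names and parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def triangulate_object(frame1, frame2, K, baseline_m):
    """Estimate 3-D points on the object from two frames captured as
    the vehicle moves. `baseline_m` is the known camera motion between
    frames, used to fix the metric scale of the monocular result."""
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

    # Match ORB features between the two views.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover relative camera pose from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate with the translation scaled to the known motion.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t * baseline_m])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T   # N x 3 points in the camera frame
```

The resulting points could then be transformed into the hinge-centered frame described above to compute the distance D.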
In some examples, the systems and methods disclosed herein give vehicle occupants an indication of the distance to collision between an opening door and nearby objects.
Although the different examples are illustrated as having specific components, the examples of this disclosure are not limited to those particular combinations. It is possible to use some of the components or features from any of the examples in combination with features or components from any of the other examples.
The foregoing description shall be interpreted as illustrative and not in any limiting sense. A worker of ordinary skill in the art would understand that certain modifications could come within the scope of this disclosure. For these reasons, the following claims should be studied to determine the true scope and content of this disclosure.