The present disclosure relates to a system and method for reverse perpendicular parking a vehicle.
Vehicles may include autonomous driving systems that include sensors for sensing objects external to the vehicle. These sensors (such as ultrasonic, RADAR, or LIDAR) may be expensive and/or inaccurate.
According to one embodiment, a method for parking a vehicle in a parking lot includes generating steering commands for the vehicle while in the lot based on an occupancy grid and plenoptic camera data. The occupancy grid indicates occupied areas and unoccupied areas around the vehicle and is derived from map data defining parking spots relative to a topological feature contained within the lot. The plenoptic camera data defines a plurality of depth maps and corresponding images that include the topological feature captured during movement of the vehicle. The steering commands are generated such that the vehicle follows a reverse perpendicular path into one of the spots without entering an occupied area.
According to another embodiment, a vehicle includes a controller configured to generate steering commands for the vehicle in a parking lot. The steering commands are based on an occupancy grid indicating occupied and unoccupied areas around the vehicle and derived from map data defining parking spots relative to a topological feature of the lot, and plenoptic camera data defining depth maps and corresponding images including the topological feature, such that the vehicle follows a reverse perpendicular path into one of the spots.
According to yet another embodiment, a method includes generating steering commands for a vehicle in a lot. The steering commands are based on an occupancy grid indicating occupied and unoccupied areas around the vehicle and derived from map data defining parking spots relative to a topological feature contained within the lot, and plenoptic camera data defining depth maps and corresponding images including the topological feature such that the vehicle follows a reverse perpendicular path into one of the spots without entering an occupied area.
Embodiments of the present disclosure are described herein. It is to be understood, however, that the disclosed embodiments are merely examples and other embodiments can take various and alternative forms. The figures are not necessarily to scale; some features could be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention. As those of ordinary skill in the art will understand, various features illustrated and described with reference to any one of the figures can be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combinations of features illustrated provide representative embodiments for typical applications. Various combinations and modifications of the features consistent with the teachings of this disclosure, however, could be desired for particular applications or implementations.
Various embodiments of the present disclosure provide a system and method for autonomous valet parking using plenoptic cameras, and specifically for reverse perpendicular parking a vehicle. Generally, the valet parking system uses plenoptic cameras (also known as light-field cameras) to obtain images external to a vehicle. Using those images, the vehicle can identify available parking spaces and control the vehicle to park in an available space. The parking system is configured to use a plenoptic camera to obtain images external to the vehicle and to generate depth maps and images of the surrounding area. After generating the depth maps and images, the plenoptic camera sends the depth maps to the vehicle controller. The depth maps enable the controller to determine the distance between the vehicle and objects surrounding the vehicle, such as curbs, pedestrians, other vehicles, and the like. The controller uses the received depth maps and images, together with map data, to generate an occupancy grid. The occupancy grid divides the area surrounding the vehicle into a plurality of distinct regions and, based on data received from the plenoptic camera, classifies each region as either occupied (e.g., by all or part of an object) or unoccupied. The controller then identifies a desired parking space in one of a variety of different manners and, using the occupancy grid, controls the vehicle to navigate to, and park in, the desired parking space by traveling through the unoccupied regions identified in the occupancy grid.
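By way of illustration only, the following minimal Python sketch shows one way such an occupancy grid might be populated from obstacle detections; the grid extent, cell size, and vehicle-centered coordinate frame are assumptions made for the example and are not values taught by this disclosure.

```python
import numpy as np

def build_occupancy_grid(obstacle_points, grid_size=40, cell_m=0.5):
    """Classify each cell around the vehicle as occupied (True) or free (False).

    obstacle_points: (N, 2) array of obstacle positions in meters in a
    vehicle-centered frame (x forward, y left), e.g., obtained by projecting
    plenoptic depth-map pixels onto the ground plane. The 40-cell extent and
    0.5 m cell size are illustrative assumptions.
    """
    grid = np.zeros((grid_size, grid_size), dtype=bool)
    half = grid_size * cell_m / 2.0
    for x, y in obstacle_points:
        if -half <= x < half and -half <= y < half:
            row = int((x + half) / cell_m)
            col = int((y + half) / cell_m)
            grid[row, col] = True  # any detection marks the whole cell occupied
    return grid

# Example: two nearby detections ahead and to the left of the vehicle
points = np.array([[3.2, 1.1], [3.4, 1.0]])
print(build_occupancy_grid(points).sum(), "occupied cell(s)")
```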
Referring to
The vehicle 20 includes a cabin having a display 46 in electronic communication with the controller 50. The display 46 may be a touchscreen that both displays information to the passengers of the vehicle and functions as an input. A person having ordinary skill in the art will appreciate that many different display and input devices are available and that the present disclosure is not limited to touchscreens. An audio system 48 is disposed within the cabin and may include one or more speakers for providing information and entertainment to the driver and/or passengers. The system 48 may also include a microphone for receiving inputs.
The vehicle 20 also includes a vision system for sensing areas external to the vehicle. The vision system may include a plurality of different types of sensors such as cameras, ultrasonic sensors, RADAR, LIDAR, and combinations thereof. In one embodiment, the vision system includes at least one plenoptic camera 52. In one embodiment, the vehicle 20 includes a single plenoptic camera 52 (also known as a light-field camera) located at a rear end of the vehicle. Alternatively, the vehicle 20 may include a plurality of plenoptic cameras located on several sides of the vehicle.
Plenoptic cameras have a series of focal points that allow the viewpoint within an image to be shifted. Plenoptic cameras are capable of generating a depth map of the field of view of the camera and capturing images. A depth map provides depth estimates for pixels in an image from a reference viewpoint, and thus provides a spatial representation indicating the distance of objects from the camera and the distances between objects within the field of view. An example of using a light-field camera to generate a depth map is disclosed in U.S. Patent Application Publication No. 2015/0049916 by Ciurea et al., the contents of which are hereby incorporated by reference in their entirety. The camera 52 can detect, among other things, the presence of several objects in the field of view of the camera, generate a depth map and images based on the objects detected in the field of view of the camera 52, detect the presence of an object entering the field of view of the camera, and detect surface variation of a road surface and surrounding areas.
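As a hedged illustration of how per-pixel depth estimates might be turned into the obstacle points used in the grid sketch above, the following projects a single row of a depth map into two-dimensional points; the field of view and the single-row simplification are assumptions of the example, not details of the camera 52.

```python
import numpy as np

def depth_map_to_points(depth_m, fov_rad=1.2):
    """Project one row of a depth map into 2-D obstacle points.

    depth_m -- 1-D array of per-pixel depth estimates (m) along one image
    row; fov_rad is an assumed horizontal field of view. Returns (N, 2)
    points (x forward, y left) in the camera frame.
    """
    n = depth_m.shape[0]
    angles = np.linspace(-fov_rad / 2.0, fov_rad / 2.0, n)
    x = depth_m * np.cos(angles)  # forward distance component
    y = depth_m * np.sin(angles)  # lateral distance component
    return np.stack([x, y], axis=1)

print(depth_map_to_points(np.array([4.0, 3.8, 3.9])).round(2))
```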
Referring to
Each of the imagers 56 may include a filter used to capture image data with respect to a specific portion of the light spectrum. For example, the filters may limit each of the cameras to detecting a specific spectrum of near-infrared light or a select portion of the visible light spectrum.
The camera module 54 may include charge collecting sensors that operate by converting the desired electromagnetic frequency into a charge proportional to the intensity of the electromagnetic frequency and the time that the sensor is exposed to the source. Charge collecting sensors, however, typically have a charge saturation point. When the sensor reaches the charge saturation point, sensor damage may occur and/or information regarding the electromagnetic frequency source may be lost. To avoid damaging the charge collecting sensors, a mechanism (e.g., a shutter) may be used to proportionally reduce the exposure to the electromagnetic frequency source or to control the amount of time the sensor is exposed to the electromagnetic frequency source. However, a trade-off is made: the sensitivity of the charge collecting sensor is reduced in exchange for preventing damage to the sensor when such a mechanism is used. This reduction in sensitivity may be referred to as a reduction in the dynamic range of the charge collecting sensor. The dynamic range refers to the amount of information (bits) that may be obtained by the charge collecting sensor during a period of exposure to the electromagnetic frequency source.
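A minimal numeric sketch of this trade-off follows; the full-well capacity, noise floor, and linear charge model are all assumptions chosen for illustration.

```python
import numpy as np

# Illustrative model only: collected charge grows with intensity * exposure
# time and clips at a saturation level; shortening the exposure prevents
# clipping but leaves fewer distinguishable levels (reduced dynamic range).
SATURATION_E = 50_000  # electrons (assumed full-well capacity)
NOISE_E = 10           # electrons (assumed noise floor)

def collected_charge(intensity, exposure_s):
    return np.minimum(intensity * exposure_s, SATURATION_E)

def dynamic_range_bits(exposure_scale):
    # Usable levels shrink in proportion to the exposure reduction.
    return np.log2(SATURATION_E * exposure_scale / NOISE_E)

print(f"full exposure: {dynamic_range_bits(1.0):.1f} bits")
print(f"1/8 exposure:  {dynamic_range_bits(0.125):.1f} bits")
```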
The vision system is in electrical communication with the controller 50 for controlling the function of various components. The controller may communicate via a serial bus (e.g., Controller Area Network (CAN)) or via dedicated electrical conduits. The controller generally includes any number of microprocessors, ASICs, ICs, memory (e.g., FLASH, ROM, RAM, EPROM and/or EEPROM) and software code that co-act with one another to perform a series of operations. The controller also includes predetermined data, or “look up tables,” that are based on calculations and test data and are stored within the memory. The controller may communicate with other vehicle systems and controllers over one or more wired or wireless vehicle connections using common bus protocols (e.g., CAN and LIN). As used herein, a reference to “a controller” refers to one or more controllers. The controller 50 receives signals from the vision system and includes memory containing machine-readable instructions for processing the data from the vision system. The controller 50 is programmed to output instructions to at least the display 46, the audio system 48, the steering system 30, the braking system 24, and the powerplant 21 to autonomously operate the vehicle.
The processor 64 may be any suitable processing device or set of processing devices, such as a microprocessor, a microcontroller-based platform, a suitable integrated circuit, or one or more application-specific integrated circuits configured to execute the set of instructions 68. The main memory 66 may be any suitable memory device such as, but not limited to, volatile memory (e.g., RAM), non-volatile memory (e.g., disk memory, FLASH memory, etc.), unalterable memory (e.g., EPROMs), and read-only memory.
The system 62 includes one or more plenoptic cameras 52 in communication with the controller 50. The system 62 also includes a communications interface 70 having a wired and/or wireless network interface to enable communication with an external network 86. The external network 86 may be a collection of one or more networks, including standards-based networks (3G, 4G, Universal Mobile Telecommunications System (UMTS), GSM(R) Association, WiFi, GPS, Bluetooth, and others) available at the time of filing of this application or that may be developed in the future. Further, the external network may be a public network, such as the Internet, a private network, such as an intranet, or a combination thereof.
In some embodiments, the set of instructions 68, stored on the memory 66 and executable to enable functionality of the system 62, may be downloaded from an off-site server via the external network 86. Further, in some embodiments, the parking system 62 may communicate with a central command server via the external network 86. For example, the parking system 62 may communicate image information obtained by the cameras 52 to the central command server by controlling the communications interface 70 to transmit the images to the central command server via the network 86. The parking system 62 may also communicate any generated data maps to the central command server.
The parking system 62 is also configured to communicate with a plurality of vehicle components and vehicle systems via one or more communication buses. For example, the controller 50 may communicate with input devices 72, output devices 74, a disk drive 76, a navigation system 82, and a vehicle control system 84. The input devices 72 may include any suitable input devices that enable a driver or passenger of the vehicle to input modifications or updates to information referenced by the parking system 62. The input devices may include, for example, a control knob, an instrument panel, a keyboard, a scanner, a digital camera for image capture and/or visual command recognition, a touchscreen, an audio input device, buttons, a mouse, or a touchpad. The output devices 74 may include instrument cluster outputs, a display (e.g., the display 46), and speakers (such as those of the audio system 48).
The disk drive 76 is configured to receive a computer readable medium 78 on which one or more sets of instructions 80, such as the software for operating the parking system 62, can be embedded. Further, the instructions 80 may embody one or more of the methods or logic described herein. The instructions 80 may reside completely, or at least partially, within any one or more of the main memory 66, the computer readable medium 78, and/or the processor 64 during execution of the instructions by the processor.
While the computer-readable medium is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database and associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” also includes any tangible medium that is capable of storing, encoding, or carrying a set of instructions for execution by a processor or that cause a computer to perform any one or more of the methods or operations described herein.
Referring to
The vehicle 100 includes one or more plenoptic cameras 104. In the illustrated embodiment, the vehicle 100 includes several plenoptic cameras providing a 360° view of the area surrounding the vehicle 100. As described above, the plenoptic cameras 104 capture images of the area surrounding the vehicle. Using this data, a vehicle controller 106 generates an occupancy grid 108. The light posts 110 and 112 may be some of the identifiable features used by the controller 106 to determine the position of the vehicle 100 on the map.
The occupancy grid 108 is partitioned into a plurality of zones or regions 114. Each zone 114 may have an individual status, such as occupied or unoccupied. A zone has an occupied status if an object is detected within at least a portion of the zone 114, and an unoccupied status if no object is present within the zone. Based on the statuses of the zones, the controller is able to determine one or more drivable paths for the vehicle 100.
The driver of the vehicle 100, or a parking manager, may choose the parking spot in which the vehicle 100 is going to park. In the illustrated example, the vehicle 100 is going to park in parking space 116, as it is the only remaining parking space available. Parking space 116 is delineated by a pair of side parking lines 118 and a front parking line 120. The parking lines may be included in the map data or may be populated onto the occupancy grid using the plenoptic cameras, which, unlike RADAR sensors, are able to detect painted lines on the pavement. If the vehicle 100 is a fully autonomous vehicle, the vehicle may drive itself to space 116 and park itself automatically. Alternatively, the vehicle 100 may be a semi-autonomous vehicle, in which case the driver will navigate the vehicle to parking space 116, at which point the vehicle will take over and autonomously or semi-autonomously reverse perpendicular park itself in space 116.
At operation 154, possible parking locations are identified. The parking locations may be identified by the controller, chosen by a driver of the vehicle, or assigned by a parking manager of the parking lot. In one embodiment, the controller identifies possible parking locations using the data supplied by the plenoptic camera.
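One simplified, purely illustrative way the controller might identify candidate locations from the occupancy grid is sketched below; the spot dimensions in cells are assumptions of the example.

```python
def find_free_spots(grid, spot_rows=5, spot_cols=3):
    """Scan a boolean occupancy grid for fully unoccupied rectangles large
    enough to hold one parking spot (sizes in cells are assumed values)."""
    rows, cols = len(grid), len(grid[0])
    spots = []
    for r in range(rows - spot_rows + 1):
        for c in range(cols - spot_cols + 1):
            if all(not grid[rr][cc]
                   for rr in range(r, r + spot_rows)
                   for cc in range(c, c + spot_cols)):
                spots.append((r, c))  # top-left cell of a candidate spot
    return spots

grid = [[False] * 8 for _ in range(8)]
grid[2][4] = True  # one occupied cell
print(len(find_free_spots(grid)), "candidate spot anchors")
```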
At operation 156, one of the parking locations identified at operation 154 is selected to be the parking spot. The parking location may be selected by either the driver or the vehicle controller. In one embodiment, a vehicle display shows possible parking locations to the driver, who then chooses a parking spot via a user interface, such as a touchscreen. In another embodiment, the vehicle controller chooses the parking spot. The vehicle software may include a ranking algorithm that the controller uses in order to choose the parking spot.
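The disclosure does not prescribe the ranking criteria; the sketch below assumes, purely for illustration, a weighted score over walking distance and maneuvering room.

```python
def rank_parking_spots(spots, w_walk=1.0, w_room=2.0):
    """Rank candidate spots; each spot is a dict with illustrative fields.

    'walk_m' -- walking distance from the spot to the destination (m)
    'room_m' -- free maneuvering room in front of the spot (m)
    Lower score is better; the weights are arbitrary assumptions.
    """
    return sorted(spots, key=lambda s: w_walk * s["walk_m"] - w_room * s["room_m"])

candidates = [
    {"id": 116, "walk_m": 25.0, "room_m": 6.5},
    {"id": 117, "walk_m": 12.0, "room_m": 3.0},
]
print("selected spot:", rank_parking_spots(candidates)[0]["id"])
```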
At operation 158 the controller calculates a position of the vehicle. The position of the vehicle may be calculated as described above with reference to
Once the parking spot is chosen, a path from the current vehicle location to the selected spot is calculated at operation 162. The path may be calculated using the occupancy grid. The vehicle's current location is known on the occupancy grid, as is the selected parking spot. The controller is programmed with the driving constraints of the vehicle (such as turning radius, vehicle dimensions, ground clearance, and the like) and calculates a path, based on the driving constraints, through the unoccupied zones of the occupancy grid. The path includes both position information and velocity information. At operation 164, the controller determines whether a path was found at operation 162. If the controller was unable to calculate a path at operation 162, the path is marked as “unsuitable” or the like at operation 170, and control loops back to operation 154, where additional parking locations are identified. If a suitable path was found, control passes to operation 166.
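As a rough illustration of grid-based path finding, the sketch below runs a breadth-first search over unoccupied cells; it deliberately ignores the kinematic constraints (turning radius, vehicle footprint) and the velocity profile that the planner described above would also account for.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over unoccupied cells of a boolean occupancy grid.

    grid[r][c] is True when cell (r, c) is occupied. Returns a list of cells
    from start to goal, or None when no drivable path exists.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None  # no path: the caller falls back to another spot

grid = [[False, True, False],
        [False, True, False],
        [False, False, False]]
print(find_path(grid, (0, 0), (0, 2)))  # routes around the occupied column
```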
At operation 166, the controller generates steering, braking, and/or propulsion commands for the vehicle based on the calculated path to park the vehicle in the selected spot. Depending upon the embodiment, the vehicle may automatically control both the steering and the propulsion and braking, or may control only the steering and allow the driver to determine the appropriate propulsion and braking.
The steering, braking, and/or propulsion commands are based on an occupancy grid indicating occupied areas and unoccupied areas around the vehicle. The commands may be further based on map data defining parking spots relative to a topological feature contained within the lot, and plenoptic camera data defining a plurality of depth maps and corresponding images.
In one embodiment, the vehicle motion is controlled using position and orientation state estimates (POSE). It is reasonable to assume that the parking maneuver will be performed at low speeds, well within the limits of tire adhesion. At low speeds, a relatively simple path-following controller can calculate the steering, powertrain, and brake-system inputs to make the vehicle follow a desired path. One such algorithm uses the heading error and lateral offset to calculate a desired vehicle-path curvature. For example, the commanded path curvature may be calculated using Equation (1) below:
U_κ = κ_r + k_η·δ_η + k_ψ·δ_ψ   (1)

where U_κ = commanded vehicle path curvature, κ_r = desired path curvature, k_η = lateral path offset gain, δ_η = lateral path offset, k_ψ = heading error gain, and δ_ψ = heading error.
Using the equation above, a commanded vehicle path curvature is calculated. At low speeds, each steering wheel position produces a unique vehicle path curvature. The steering wheel position that corresponds to the commanded path curvature is sent to the vehicle steering system, such as an Electrical Power Assist Steering (EPAS) system. The EPAS system uses an electric motor and a position control system to produce the desired steering wheel angle. Using this equation, the vehicle may be parked in the selected spot without entering an occupied area of the occupancy grid.
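A minimal sketch of this lateral control law follows; the gains are illustrative, and the curvature-to-steering-angle mapping assumes a low-speed kinematic bicycle model with an assumed wheelbase, neither of which is specified by the text above.

```python
import math

def commanded_curvature(kappa_r, d_eta, d_psi, k_eta=0.5, k_psi=1.0):
    """Equation (1): U_k = kappa_r + k_eta*d_eta + k_psi*d_psi.

    kappa_r -- desired path curvature (1/m)
    d_eta   -- lateral path offset (m); d_psi -- heading error (rad)
    The gains k_eta and k_psi are illustrative values only.
    """
    return kappa_r + k_eta * d_eta + k_psi * d_psi

def road_wheel_angle(curvature, wheelbase_m=2.8):
    # Low-speed kinematic (bicycle-model) assumption: each steering
    # position yields a unique path curvature, delta = atan(L * kappa).
    return math.atan(wheelbase_m * curvature)

u_k = commanded_curvature(kappa_r=0.1, d_eta=0.2, d_psi=-0.05)
print(f"{u_k:.3f} 1/m -> {math.degrees(road_wheel_angle(u_k)):.1f} deg")
```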
For propulsion control, the vehicle position error along the path (δ_s) is used to calculate a commanded velocity (U_v). Following a technique similar to that above, Equation (2) may be used to calculate U_v:

U_v = V_r + k_s·δ_s   (2)

where V_r = desired path velocity, k_s = longitudinal path error gain, and δ_s = longitudinal path error.
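The corresponding longitudinal sketch, again with an assumed gain:

```python
def commanded_velocity(v_r, d_s, k_s=0.8):
    """Equation (2): U_v = V_r + k_s * d_s.

    v_r -- desired path velocity (m/s)
    d_s -- longitudinal path error (m); k_s is an illustrative gain.
    """
    return v_r + k_s * d_s

# Vehicle is 0.5 m behind its reference position along the path
print(commanded_velocity(v_r=1.5, d_s=0.5), "m/s commanded")
```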
The commanded change in velocity is used to calculate a commanded vehicle acceleration. The commanded vehicle acceleration is scaled by the vehicle mass to calculate a wheel torque, which is produced by the vehicle powertrain and/or brake system. This applies to conventional (gas), hybrid (gas-electric), and electric vehicles alike.
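One simple reading of that scaling is sketched below; the vehicle mass and wheel radius are assumed values, and drivetrain losses and road grade are ignored.

```python
def wheel_torque(mass_kg, accel_mps2, wheel_radius_m=0.33):
    # Commanded acceleration scaled by vehicle mass gives a force demand;
    # multiplying by the (assumed) wheel radius yields the wheel torque,
    # supplied by the powertrain when positive and the brakes when negative.
    return mass_kg * accel_mps2 * wheel_radius_m

print(wheel_torque(1800, 0.6))   # ~356 N*m of drive torque
print(wheel_torque(1800, -0.6))  # ~-356 N*m, taken up by the brakes
```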
At operation 168, the controller determines whether the vehicle is at the desired location. If so, the loop ends; if not, control passes back to operation 158 and the system again attempts to park the vehicle in the location selected at operation 156.
While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms encompassed by the claims. The words used in the specification are words of description rather than limitation, and it is understood that various changes can be made without departing from the spirit and scope of the disclosure. As previously described, the features of various embodiments can be combined to form further embodiments of the invention that may not be explicitly described or illustrated. While various embodiments may have been described as providing advantages or being preferred over other embodiments or prior art implementations with respect to one or more desired characteristics, those of ordinary skill in the art recognize that one or more features or characteristics can be compromised to achieve desired overall system attributes, which depend on the specific application and implementation. These attributes can include, but are not limited to, cost, strength, durability, life cycle cost, marketability, appearance, packaging, size, serviceability, weight, manufacturability, ease of assembly, etc. As such, embodiments described as less desirable than other embodiments or prior art implementations with respect to one or more characteristics are not outside the scope of the disclosure and can be desirable for particular applications.