The present disclosure relates to an image processing device, an image processing method, and a program, and more particularly, to an image processing device, an image processing method, and a program that can make it easier to check a surrounding situation.
Conventionally, an image processing device has been put to practical use in which images captured at a wide angle by a plurality of cameras mounted on a vehicle are converted into an image that looks down on the periphery of the vehicle from above, and the resulting image is presented to a driver for use in parking the vehicle. Furthermore, with the spread of automatic driving in the future, it is expected that the surrounding situation will also be checked in this way during traveling.
For example, Patent Document 1 discloses a vehicle periphery monitoring device that switches a viewpoint for viewing a vehicle and presents it to a user in accordance with a shift lever operation or a switch operation.
However, in the vehicle periphery monitoring device described above, switching the viewpoint according to the speed of the vehicle is not considered. For example, when the vehicle travels at high speed, a sufficient forward field of view is not ensured relative to the vehicle speed, which makes it difficult to check the surrounding situation. Furthermore, since operation information of a shift lever is used in switching the viewpoint, the signal must be processed via an electronic control unit (ECU), which may cause a delay.
The present disclosure has been made in view of such a situation, and is intended to make it easier to check a surrounding situation.
An image processing device according to an aspect of the present disclosure includes: a determination part that determines a predetermined viewpoint of a viewpoint image related to periphery of a moving object in a case of viewing the moving object from the viewpoint according to a speed of the moving object that can move at an arbitrary speed; a generation part that generates the viewpoint image that is a view from the viewpoint determined by the determination part; and a synthesis part that synthesizes an image related to the moving object at a position where the moving object can exist in the viewpoint image.
An image processing method according to an aspect of the present disclosure includes, by an image processing device that performs image processing: determining a predetermined viewpoint of a viewpoint image related to periphery of a moving object in a case of viewing the moving object from the viewpoint according to a speed of the moving object that can move at an arbitrary speed; generating the viewpoint image that is a view from the viewpoint determined; and synthesizing an image related to the moving object at a position where the moving object can exist in the viewpoint image.
A program according to an aspect of the present disclosure causes a computer of an image processing device that performs image processing to perform image processing including: determining a predetermined viewpoint of a viewpoint image related to periphery of a moving object in a case of viewing the moving object from the viewpoint according to a speed of the moving object that can move at an arbitrary speed; generating the viewpoint image that is a view from the viewpoint determined; and synthesizing an image related to the moving object at a position where the moving object can exist in the viewpoint image.
In an aspect of the present disclosure, a predetermined viewpoint of a viewpoint image related to periphery of a moving object in a case of viewing the moving object from the viewpoint is determined according to a speed of the moving object that can move at an arbitrary speed, the viewpoint image that is a view from the viewpoint determined is generated, and an image related to the moving object is synthesized at a position where the moving object can exist in the viewpoint image.
According to an aspect of the present disclosure, it is possible to make it easier to check a surrounding situation.
Note that the effects described herein are not necessarily limited, and any of the effects described in the present disclosure may be applied.
Specific embodiments to which the present technology is applied will be described in detail below with reference to the drawings.
<Configuration Example of Image Processing Device>
As shown in
For example, the image processing device 11 is used by being mounted on a vehicle 21 as shown in
Then, the distortion correction part 12 of the image processing device 11 is supplied with visible images from each of the plurality of RGB cameras 23, and the depth image synthesis part 14 of the image processing device 11 is supplied with depth images from each of the plurality of distance sensors 24.
The distortion correction part 12 performs distortion correction processing of correcting the distortion that occurs in the wide-angle, high-resolution visible image supplied from the RGB camera 23 because the image is captured at a wide angle of view. For example, correction parameters according to the lens design data of the RGB camera 23 are prepared in advance for the distortion correction part 12. Then, the distortion correction part 12 divides the visible image into a plurality of small blocks, converts the coordinates of each pixel in each small block into corrected coordinates according to the correction parameters, transfers the pixels to the converted coordinates, interpolates gaps at the transfer-destination pixels with a Lanczos filter or the like, and then clips the interpolated result into a rectangle. Through such distortion correction processing, the distortion correction part 12 can correct the distortion that occurs in a visible image captured at a wide angle.
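As a rough illustration of this kind of remapping-based correction, the following Python sketch undistorts a wide-angle frame by computing a coordinate conversion table once and then remapping every incoming frame with Lanczos interpolation; the OpenCV-style intrinsic matrix and distortion coefficients shown are placeholder assumptions standing in for the correction parameters prepared from the lens design data, not values from this disclosure.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients; in practice these would be
# prepared in advance from the lens design data of the wide-angle RGB camera.
K = np.array([[700.0, 0.0, 960.0],
              [0.0, 700.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.08, 0.0, 0.0, -0.01])  # k1, k2, p1, p2, k3

def build_undistort_maps(width, height):
    """Precompute the per-pixel coordinate conversion (remap tables) once."""
    new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (width, height), alpha=0)
    return cv2.initUndistortRectifyMap(K, dist, None, new_K,
                                       (width, height), cv2.CV_32FC1)

def correct_distortion(frame, map_x, map_y):
    """Transfer each pixel to its corrected coordinates and fill the gaps
    at the transfer destination by Lanczos interpolation."""
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LANCZOS4)

# Usage: the maps are built once per camera and applied to every visible image.
# map_x, map_y = build_undistort_maps(1920, 1080)
# corrected = correct_distortion(raw_frame, map_x, map_y)
```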
For example, the distortion correction part 12 applies distortion correction processing to a visible image in which distortion has occurred as shown in the upper side of
The visible image memory 13 stores the visible images supplied from the distortion correction part 12 for a predetermined number of frames. Then, the past visible image stored in the visible image memory 13 is read out from the visible image memory 13 as a past frame visible image at a timing necessary for performing processing in the viewpoint conversion image generation part 16.
The depth image synthesis part 14 uses the visible image that has been subjected to the distortion correction and is supplied from the distortion correction part 12 as a guide signal, and performs synthesizing processing to improve the resolution of the depth image obtained by capturing the direction corresponding to each visible image. For example, the depth image synthesis part 14 can improve the resolution of the depth image, which is generally sparse data, by using a guided filter that expresses the input image by linear regression of the guide signal. Then, the depth image synthesis part 14 supplies the depth image with the improved resolution to the depth image memory 15 and the viewpoint conversion image generation part 16. Note that, in the following, the depth image that is obtained by the depth image synthesis part 14 performing synthesizing processing on the latest depth image supplied from the distance sensor 24 and is supplied to the viewpoint conversion image generation part 16 is referred to as a current frame depth image as appropriate.
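The guided-filter idea mentioned above can be sketched as follows; this is a minimal single-channel Python illustration in which a grayscale visible image is used as the guide signal, and the window radius, regularization constant, and function names are assumptions for illustration rather than the actual processing of the depth image synthesis part 14.

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Guided filter: express src as a local linear regression of the guide,
    q = a * guide + b, averaging the coefficients over local windows."""
    guide = guide.astype(np.float32)
    src = src.astype(np.float32)
    ksize = (2 * radius + 1, 2 * radius + 1)

    mean_I = cv2.blur(guide, ksize)
    mean_p = cv2.blur(src, ksize)
    cov_Ip = cv2.blur(guide * src, ksize) - mean_I * mean_p   # covariance of guide and input
    var_I = cv2.blur(guide * guide, ksize) - mean_I * mean_I  # variance of the guide

    a = cov_Ip / (var_I + eps)        # per-window linear coefficient
    b = mean_p - a * mean_I           # per-window offset

    return cv2.blur(a, ksize) * guide + cv2.blur(b, ksize)

def improve_depth_resolution(sparse_depth, visible_gray):
    """Upscale a low-resolution depth map to the visible image size and refine
    it using the distortion-corrected visible image as the guide signal."""
    h, w = visible_gray.shape
    coarse = cv2.resize(sparse_depth, (w, h), interpolation=cv2.INTER_NEAREST)
    return guided_filter(visible_gray, coarse)
```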
The depth image memory 15 stores the depth images supplied from the depth image synthesis part 14 for a predetermined number of frames. Then, the past depth image stored in the depth image memory 15 is read from the depth image memory 15 as a past frame depth image at a timing necessary for performing processing in the viewpoint conversion image generation part 16.
For example, the viewpoint conversion image generation part 16 generates a viewpoint conversion image by performing viewpoint conversion on a current frame visible image supplied from the distortion correction part 12, or on a past frame visible image read from the visible image memory 13, such that the viewpoint looks down on the vehicle 21 from above. Moreover, the viewpoint conversion image generation part 16 can generate a more suitable viewpoint conversion image by using the current frame depth image supplied from the depth image synthesis part 14 and the past frame depth image read from the depth image memory 15.
At this time, the viewpoint conversion image generation part 16 can set the viewpoint so that a viewpoint conversion image looking down on the vehicle 21 from an optimal viewpoint position and line-of-sight direction can be generated according to the traveling direction and the vehicle speed of the vehicle 21. Here, with reference to
For example, as shown in
Furthermore, as shown in
Moreover, as shown in
On the other hand, as shown in
Note that the viewpoint conversion image generation part 16 can set the origin of the viewpoint (gaze point) at the time of generating the viewpoint conversion image fixedly to the center of the vehicle 21 as shown in
For example, as shown in
The image processing device 11 configured as described above can set the viewpoint according to the speed of the vehicle 21 to generate the viewpoint conversion image that makes it easier to check the surrounding situation, and present the viewpoint conversion image to the driver. For example, the image processing device 11 can set the viewpoint such that a distant visual field can be sufficiently secured during high-speed traveling, so that viewing can be made easier and driving safety can be improved.
<First Configuration Example of Viewpoint Conversion Image Generation Part>
As shown in
The motion estimation part 31 uses the current frame visible image and the past frame visible image, as well as the current frame depth image and the past frame depth image, to estimate the motion of an object that is moving (hereinafter referred to as a moving object) and is captured in those images. For example, the motion estimation part 31 performs a motion vector search (motion estimation: ME) on the same moving object captured in the visible images of a plurality of frames to estimate the motion of the moving object. Then, the motion estimation part 31 supplies the motion vector determined as a result of estimating the motion of the moving object to the motion compensation part 32 and the viewpoint determination part 35.
The motion compensation part 32 performs motion compensation (MC) that shifts the moving object captured in a past frame visible image to its current position on the basis of the motion vector of the moving object supplied from the motion estimation part 31. In this way, the motion compensation part 32 can correct the position of the moving object captured in the past frame visible image so that it matches the position where the moving object should currently be located. Then, the motion-compensated past frame visible image is supplied to the image synthesis part 33.
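A rough illustration of this motion vector search and motion compensation is shown below as a block-matching sketch in Python over grayscale frames; the block size, search range, SAD cost, and function names are assumptions made for illustration, and the actual motion estimation part 31 may also make use of the depth images as described above.

```python
import numpy as np

def estimate_motion(past, current, block=16, search=8):
    """Block-matching motion estimation: for each block of the current frame,
    find the best-matching block in the past frame and return motion vectors."""
    h, w = current.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = current[by:by + block, bx:bx + block].astype(np.float32)
            best, best_vec = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    ref_blk = past[y:y + block, x:x + block].astype(np.float32)
                    cost = np.abs(cur_blk - ref_blk).sum()   # SAD matching cost
                    if best is None or cost < best:
                        best, best_vec = cost, (dy, dx)
            vectors[(by, bx)] = best_vec
    return vectors

def motion_compensate(past, vectors, block=16):
    """Shift each block of the past frame by its motion vector so that moving
    objects appear at the position where they should currently be located."""
    out = np.zeros_like(past)
    for (by, bx), (dy, dx) in vectors.items():
        out[by:by + block, bx:bx + block] = past[by + dy:by + dy + block,
                                                 bx + dx:bx + dx + block]
    return out
```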
The image synthesis part 33 reads the illustrative image of the vehicle 21 from the storage part 34, and generates an image synthesizing result (see
The storage part 34 stores, as advance information, data of the illustrative image of the vehicle 21 (image data that is related to the vehicle 21 and is of the vehicle 21 viewed from the rear or the front).
The viewpoint determination part 35 first calculates the speed of the vehicle 21 on the basis of the motion vector supplied from the motion estimation part 31. Then, the viewpoint determination part 35 determines the viewpoint used when generating the viewpoint conversion image so that the viewpoint position and the line-of-sight direction correspond to the calculated speed of the vehicle 21, and supplies information indicating the viewpoint (for example, viewpoint coordinates (x, y, z) described with reference to
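For illustration, the speed of the vehicle 21 might be derived from those motion vectors roughly as in the following sketch, which reuses the block-wise motion vectors from the previous sketch; the assumption that the static background dominates the scene, as well as the metres-per-pixel and frame-interval parameters, are illustrative and not taken from this disclosure.

```python
import numpy as np

def speed_vector_from_motion(vectors, metres_per_pixel, frame_interval_s):
    """Estimate the own-vehicle speed vector from the motion vectors of the scene.

    Illustrative assumption: the static background dominates, so the ego-motion
    is approximately the negative of the median scene motion vector.
    """
    flow = np.array(list(vectors.values()), dtype=np.float32)  # (dy, dx) per block
    median_flow = np.median(flow, axis=0)
    ego_pixels_per_frame = -median_flow
    return ego_pixels_per_frame * metres_per_pixel / frame_interval_s  # m/s, (y, x)
```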
The projection conversion part 36 applies projection conversion to the image synthesizing result supplied from the image synthesis part 33 so that the image becomes a view from the viewpoint determined by the viewpoint determination part 35. In this way, the projection conversion part 36 can acquire the viewpoint conversion image in which the viewpoint is changed according to the speed of the vehicle 21, and outputs the viewpoint conversion image to a subsequent display device (not shown) such as a head-up display, a navigation device, or an external device, for example.
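When the surroundings are approximated as a flat road surface, such a projection conversion can be sketched with a planar homography as below; the four reference-point correspondences, and the idea of encoding the determined viewpoint through them, are illustrative assumptions, and the projection conversion part 36 is not limited to this method.

```python
import cv2
import numpy as np

def project_to_viewpoint(synth_image, src_ground_pts, dst_ground_pts, out_size):
    """Warp the image synthesizing result so that four reference points on the
    road surface land where the determined viewpoint would see them.

    src_ground_pts: pixel positions of four road-surface points in the image
                    synthesizing result.
    dst_ground_pts: positions those points should take in the viewpoint
                    conversion image; they change with the determined viewpoint.
    """
    H = cv2.getPerspectiveTransform(np.float32(src_ground_pts),
                                    np.float32(dst_ground_pts))
    return cv2.warpPerspective(synth_image, H, out_size)

# Usage (placeholder coordinates only):
# view = project_to_viewpoint(
#     synth,
#     [(400, 700), (1500, 700), (1800, 1050), (100, 1050)],
#     [(700, 150), (1220, 150), (1220, 1050), (700, 1050)],
#     (1920, 1080))
```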
Here, the image synthesizing result output from the image synthesis part 33 will be described with reference to
For example, as shown in the upper part of
Thereafter, as shown in the middle part of
At this time, the image synthesis part 33 can synthesize the illustrative image of the vehicle 21 as viewed from behind onto the past frame visible image, in which the current position of the vehicle 21 is seen from behind, at the current position of the vehicle 21, and output the image synthesizing result as shown in the lower part of
As described above, the viewpoint conversion image generation part 16 can generate the viewpoint conversion image in which the viewpoint is set according to the speed of the vehicle 21. At this time, in the viewpoint conversion image generation part 16, since the viewpoint determination part 35 can internally determine the speed of the vehicle 21, for example, the processing of an electronic control unit (ECU) is not required, and the viewpoint can be determined with a low delay.
<Second Configuration Example of Viewpoint Conversion Image Generation Part>
As shown in
The steering wheel operation, the speed of the vehicle 21, and the like are supplied to the viewpoint determination part 35A from an ECU (not shown) as own vehicle motion information. Then, the viewpoint determination part 35A uses the own vehicle motion information to determine the viewpoint used when generating the viewpoint conversion image so that the viewpoint position and the line-of-sight direction correspond to the speed of the vehicle 21, and supplies information indicating the viewpoint to the perspective projection conversion part 44. Note that the detailed configuration of the viewpoint determination part 35A will be described later with reference to
The matching part 41 performs matching of a plurality of corresponding points set on the surface of an object around the vehicle 21 using the current frame visible image, the past frame visible image, the current frame depth image, and the past frame depth image.
For example, as shown in
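Corresponding points between the current frame and a past frame can be found, for instance, with standard feature matching as in the rough Python sketch below; ORB features and a brute-force matcher are an illustrative choice rather than what the matching part 41 necessarily uses, and the depth images can be matched analogously through the pixel correspondences found here.

```python
import cv2

def match_corresponding_points(current_visible, past_visible, max_points=200):
    """Detect features in both visible images and return matched corresponding
    points on the surfaces of objects around the vehicle."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_cur, des_cur = orb.detectAndCompute(current_visible, None)
    kp_past, des_past = orb.detectAndCompute(past_visible, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_cur, des_past), key=lambda m: m.distance)

    # Each pair is ((x_cur, y_cur), (x_past, y_past)) for one corresponding point.
    return [(kp_cur[m.queryIdx].pt, kp_past[m.trainIdx].pt)
            for m in matches[:max_points]]
```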
The texture generation part 42 stitches the current frame visible image and the past frame visible image so as to match the corresponding points thereof on the basis of the matching result of the visible images supplied from the matching part 41. Then, the texture generation part 42 generates a texture for expressing the surface and texture of the object around the vehicle 21 from the visible image acquired by stitching, and supplies the texture to the perspective projection conversion part 44.
The three-dimensional model configuration part 43 stitches the current frame depth image and the past frame depth image so as to match the corresponding points thereof on the basis of the matching result of the depth images supplied from the matching part 41. Then, the three-dimensional model configuration part 43 forms a three-dimensional model for three-dimensionally expressing an object around the vehicle 21 from the depth image acquired by the stitching and supplies the three-dimensional model to the perspective projection conversion part 44.
The perspective projection conversion part 44 applies the texture supplied from the texture generation part 42 to the three-dimensional model supplied from the three-dimensional model configuration part 43, creates a perspective projection image of the texture-attached three-dimensional model viewed from the viewpoint determined by the viewpoint determination part 35A, and supplies the perspective projection image to the image synthesis part 45. For example, the perspective projection conversion part 44 can create a viewpoint conversion image using a perspective projection conversion matrix represented by the following Equation (1). Here, Equation (1) represents perspective projection from an arbitrary point xV to the homogeneous coordinate expression y0 of the projection point x0 when the viewpoint is at xV3 = −d and the projection plane is xV3 = 0; for example, when d is infinite, this reduces to parallel projection.
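Although Equation (1) itself is not reproduced here, a standard perspective projection matrix consistent with that description (viewpoint at xV3 = −d, projection plane xV3 = 0, reducing to parallel projection as d tends to infinity) can be written, as an assumed reconstruction rather than the matrix of the original document, as:

$$
y_0 =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 1/d & 1
\end{pmatrix}
x_V,
\qquad
x_0 = \frac{1}{(y_0)_4}\, y_0,
$$

where $(y_0)_4$ is the fourth homogeneous component; as $d \to \infty$ the $1/d$ entry vanishes and the matrix reduces to a parallel (orthographic) projection onto the plane $x_{V3} = 0$.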
The image synthesis part 45 reads the illustrative image of the vehicle 21 from the storage part 46 and synthesizes the illustrative image of the vehicle 21 according to the current position of the vehicle 21 in the perspective projection image supplied from the perspective projection conversion part 44. Therefore, the image synthesis part 45 can acquire the viewpoint conversion image as described above with reference to
The storage part 46 stores, as advance information, data of the illustrative image of the vehicle 21 (image data that is of an image related to the vehicle 21 and is of the vehicle 21 viewed from each viewpoint).
As described above, the viewpoint conversion image generation part 16A can generate the viewpoint conversion image in which the viewpoint is set according to the speed of the vehicle 21. At this time, by using a three-dimensional model, the viewpoint conversion image generation part 16A can generate a viewpoint conversion image with a higher degree of freedom and with blind spots reliably reduced.
<Configuration Example of Viewpoint Determination Part>
With reference to
As shown in
Here, as shown in
The parameter calculation part 51 calculates, according to the vehicle speed indicated by the own vehicle motion information described above, the parameter r indicating the distance from the center of the vehicle 21 to the viewpoint and the parameter θ indicating the angle formed by the viewpoint direction with respect to the vertical line passing through the center of the vehicle 21, and supplies the parameters to the viewpoint coordinate calculation part 54.
For example, the parameter calculation part 51 can determine the parameter r on the basis of the relationship between the speed and the parameter r as shown in A of
Similarly, the parameter calculation part 51 can determine the parameter θ on the basis of the relationship between the speed and the parameter θ as shown in B of
The θ lookup table storage part 52 stores a relationship as shown in B of
The r lookup table storage part 53 stores a relationship as shown in A of
The viewpoint coordinate calculation part 54 uses the parameter r and the parameter θ supplied from the parameter calculation part 51 and the parameter φ (for example, information indicating steering wheel operation) indicated by the own vehicle motion information described above to calculate the viewpoint coordinates, and supplies the viewpoint coordinates to the corrected viewpoint coordinate calculation part 57. For example, as shown in
The origin coordinate correction part 55 calculates an origin correction vector Xdiff indicating the direction and magnitude of the origin correction amount for moving the origin from the center of the vehicle 21 according to the vehicle speed indicated by the own vehicle motion information as described above, and supplies the origin correction vector Xdiff to the corrected viewpoint coordinate calculation part 57.
For example, the origin coordinate correction part 55 can determine the origin correction vector Xdiff on the basis of the relationship between the speed and the origin correction vector Xdiff as shown in
The X lookup table storage part 56 stores a relationship as shown in
The corrected viewpoint coordinate calculation part 57 performs a correction that moves the origin according to the origin correction vector Xdiff on the viewpoint coordinates (x0, y0, z0), which are supplied from the viewpoint coordinate calculation part 54 with the own vehicle as the center, and calculates the corrected viewpoint coordinates. Then, the corrected viewpoint coordinate calculation part 57 outputs the calculated viewpoint coordinates as the final viewpoint coordinates (x, y, z), and supplies the final viewpoint coordinates to, for example, the perspective projection conversion part 44 in
The viewpoint determination part 35A is configured as described above, and can determine an appropriate viewpoint according to the speed of the vehicle 21.
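Under the assumption that the viewpoint coordinates are obtained by a spherical-to-Cartesian conversion of (r, θ, φ) and then shifted by the origin correction vector Xdiff, the flow described above can be summarized in the following Python sketch; the lookup-table values, the interpolation between table entries, and the conversion itself are illustrative assumptions, and the actual relationships are those stored in the θ, r, and X lookup table storage parts.

```python
import numpy as np

# Illustrative lookup tables: vehicle speed [km/h] -> distance r [m] and angle theta.
SPEED_KEYS = np.array([0.0, 20.0, 60.0, 100.0])
R_TABLE = np.array([4.0, 6.0, 10.0, 15.0])          # viewpoint placed farther at speed
THETA_TABLE = np.radians([10.0, 25.0, 45.0, 60.0])  # line of sight more tilted at speed

def determine_viewpoint(speed, phi, origin_correction):
    """Determine the final viewpoint coordinates (x, y, z).

    speed: own vehicle speed, phi: azimuth reflecting steering wheel operation,
    origin_correction: vector Xdiff moving the origin from the vehicle center.
    """
    # Parameter calculation part: interpolate r and theta from the lookup tables.
    r = np.interp(speed, SPEED_KEYS, R_TABLE)
    theta = np.interp(speed, SPEED_KEYS, THETA_TABLE)

    # Viewpoint coordinate calculation part: spherical to Cartesian, vehicle-centered.
    x0 = r * np.sin(theta) * np.cos(phi)
    y0 = r * np.sin(theta) * np.sin(phi)
    z0 = r * np.cos(theta)

    # Corrected viewpoint coordinate calculation part: shift by the origin correction.
    return tuple(np.array([x0, y0, z0]) + np.asarray(origin_correction, dtype=float))

# Usage: at higher speed the viewpoint is placed farther away and more obliquely.
# print(determine_viewpoint(80.0, np.pi, origin_correction=(1.0, 0.0, 0.0)))
```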
Furthermore, by correcting the origin coordinates in the viewpoint determination part 35A, that is, by adjusting the x-coordinate of the viewpoint origin according to the speed vector of the vehicle 21, for example, as described with reference to
<Processing Example of Image Processing>
The image processing performed in the image processing device 11 will be described with reference to
For example, when the image processing device 11 is supplied with power and activated, processing starts and the image processing device 11 acquires a visible image and a depth image captured by the RGB camera 23 and the distance sensor 24 in
In step S12, the distortion correction part 12 corrects the distortion occurring in the visible image captured at a wide angle, and supplies the resultant to the visible image memory 13, the depth image synthesis part 14, and the viewpoint conversion image generation part 16.
In step S13, the depth image synthesis part 14 synthesizes the depth image so as to improve the resolution of the low-resolution depth image by using the visible image supplied from the distortion correction part 12 in step S12 as a guide signal, and supplies the synthesized image to the depth image memory 15 and the viewpoint conversion image generation part 16.
In step S14, the visible image memory 13 stores the visible image supplied from the distortion correction part 12 in step S12, and the depth image memory 15 stores the depth image supplied from the depth image synthesis part 14 in step S13.
In step S15, the viewpoint conversion image generation part 16 determines whether or not the past frame image required for the processing is stored in the memory, that is, whether or not the past frame visible image is stored in the visible image memory 13, and the past frame depth image is stored in the depth image memory 15. Then, the processing of steps S11 to S15 is repeatedly performed until the viewpoint conversion image generation part 16 determines that the past frame image required for the processing is stored in the memory.
In step S15, in a case where the viewpoint conversion image generation part 16 determines that the past frame image is stored in the memory, the process proceeds to step S16. In step S16, the viewpoint conversion image generation part 16 reads the current frame visible image supplied from the distortion correction part 12 in the immediately preceding step S12, and the current frame depth image supplied from the depth image synthesis part 14 in the immediately preceding step S13.
Furthermore, at this time, the viewpoint conversion image generation part 16 reads the past frame visible image from the visible image memory 13 and reads the past frame depth image from the depth image memory 15.
In step S17, the viewpoint conversion image generation part 16 uses the current frame visible image, the current frame depth image, the past frame visible image, and the past frame depth image read in step S16 to perform viewpoint conversion image generation processing (processing of
In step S21, the motion estimation part 31 uses the current frame visible image and the past frame visible image, as well as the current frame depth image and the past frame depth image to calculate the motion vector of the moving object, and supplies the motion vector to the motion compensation part 32 and the viewpoint determination part 35.
In step S22, the motion compensation part 32 performs motion compensation on the past frame visible image on the basis of the motion vector of the moving object supplied in step S21, and supplies the motion-compensated past frame visible image to the image synthesis part 33.
In step S23, the image synthesis part 33 reads the data of the illustrative image of the vehicle 21 from the storage part 34.
In step S24, the image synthesis part 33 superimposes the illustrative image of the vehicle 21 read in step S23 on the motion-compensated past frame visible image supplied from the motion compensation part 32 in step S22, and supplies the image synthesizing result obtained as a result thereof to the projection conversion part 36.
In step S25, the viewpoint determination part 35 calculates the speed vector of the vehicle 21 on the basis of the motion vector of the moving object supplied from the motion estimation part 31 in step S21.
In step S26, the viewpoint determination part 35 determines the viewpoint at the time of generating the viewpoint conversion image such that the viewpoint position and the line-of-sight direction correspond to the speed vector of the vehicle 21 calculated in step S25.
In step S27, the projection conversion part 36 performs projection conversion on the image synthesizing result supplied from the image synthesis part 33 in step S24 so that the image is a view from the viewpoint determined by the viewpoint determination part 35 in step S26. Therefore, the projection conversion part 36 generates the viewpoint conversion image and outputs the viewpoint conversion image to, for example, a display device (not shown) at the subsequent stage, and then the viewpoint conversion image generation processing ends.
In step S31, the viewpoint determination part 35A and the three-dimensional model configuration part 43 acquire the own vehicle motion information at the current point.
In step S32, the matching part 41 matches the corresponding points between the current frame visible image and the past frame visible image, and also matches the corresponding points between the current frame depth image and the past frame depth image.
In step S33, the texture generation part 42 stitches the current frame visible image and the past frame visible image according to the corresponding points of the images that have been matched by the matching part 41 in step S32.
In step S34, the texture generation part 42 generates a texture from the visible image acquired by stitching in step S33, and supplies the texture to the perspective projection conversion part 44.
In step S35, the three-dimensional model configuration part 43 stitches the current frame depth image and the past frame depth image so as to match the corresponding points of the images that have been matched by the matching part 41 in step S32.
In step S36, the three-dimensional model configuration part 43 generates a three-dimensional model formed on the basis of the depth image acquired by stitching in step S35, and supplies the three-dimensional model to the perspective projection conversion part 44.
In step S37, the viewpoint determination part 35A uses the own vehicle motion information acquired in step S31 to determine the viewpoint at the time of generating the viewpoint conversion image such that the viewpoint position and the line-of-sight direction correspond to the speed of the vehicle 21.
In step S38, the perspective projection conversion part 44 attaches the texture supplied from the texture generation part 42 in step S34 to the three-dimensional model supplied from the three-dimensional model configuration part 43 in step S36. Then, the perspective projection conversion part 44 performs perspective projection conversion for creating a perspective projection image of the three-dimensional model attached with the texture viewed from the viewpoint determined by the viewpoint determination part 35A in step S37, and supplies the perspective projection image to the image synthesis part 45.
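The perspective projection in step S38 can be illustrated by applying a matrix of the form assumed after Equation (1) above to the vertices of the textured three-dimensional model; the following numpy sketch only projects points, and the choice of d as well as the omitted rasterization of the textured triangles are illustrative simplifications.

```python
import numpy as np

def perspective_project_points(points_xyz, d):
    """Project 3D points onto the plane x3 = 0 from a viewpoint at x3 = -d,
    using the homogeneous projection matrix discussed with Equation (1)."""
    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0 / d, 1.0]])
    homogeneous = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    projected = homogeneous @ P.T                  # apply the projection matrix
    return projected[:, :3] / projected[:, 3:4]    # divide by the homogeneous w

# Usage: project the vertices of the three-dimensional model from a viewpoint 5 m away.
# pts = perspective_project_points(model_vertices, d=5.0)
```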
In step S39, the image synthesis part 45 reads the data of the illustrative image of the vehicle 21 from the storage part 46.
In step S40, the image synthesis part 45 superimposes the illustrative image of the vehicle 21 read in step S39 on the perspective projection image supplied from the perspective projection conversion part 44 in step S38. Therefore, the image synthesis part 45 generates the viewpoint conversion image and outputs the viewpoint conversion image to, for example, a display device (not shown) at the subsequent stage, and then the viewpoint conversion image generation processing ends.
As described above, the image processing device 11 can change the viewpoint according to the speed of the vehicle 21 to create a viewpoint conversion image that makes it easier to grasp the surrounding situation, and can present the viewpoint conversion image to the driver. In particular, the image processing device 11 calculates the speed of the vehicle 21 from past frames, so the processing can be performed with low delay without requiring processing by an ECU, for example. Moreover, the image processing device 11 can grasp the shapes of peripheral objects by using past frames, and can reduce the blind spots of the viewpoint conversion image.
<Vehicle Configuration Example>
With reference to
As shown in
In the configuration example shown in
Furthermore, the RGB camera 23-3 and the distance sensor 24-3 are arranged on the right side of the vehicle 21; the RGB camera 23-3 captures the right side of the vehicle 21 at a wide angle as shown by a broken line, and the distance sensor 24-3 senses a narrower range. Similarly, the RGB camera 23-4 and the distance sensor 24-4 are arranged on the left side of the vehicle 21; the RGB camera 23-4 captures the left side of the vehicle 21 at a wide angle as shown by a broken line, and the distance sensor 24-4 senses a narrower range.
Note that the present technology can be applied to various mobile devices such as, for example, a wirelessly controlled robot and a small flying device (a so-called drone) other than the vehicle 21.
<Computer Configuration Example>
In the computer, a central processing unit (CPU) 101, a read only memory (ROM) 102, a random access memory (RAM) 103, and an electronically erasable and programmable read only memory (EEPROM) 104 are interconnected by a bus 105. An input and output interface 106 is further connected to the bus 105, and the input and output interface 106 is connected to the outside.
In the computer configured as described above, for example, the CPU 101 loads the program stored in the ROM 102 and the EEPROM 104 into the RAM 103 via the bus 105, and executes the program, so that the above-described series of processing is performed. Furthermore, the program executed by the computer (CPU 101) can be written in the ROM 102 in advance, or can be externally installed or updated in the EEPROM 104 via the input and output interface 106.
<<Application Examples>>
The technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile body such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility, airplane, drone, ship, robot, construction machine, or agricultural machine (tractor).
Each control unit includes a microcomputer that performs operation processing according to various programs, a storage part that stores programs executed by the microcomputer, parameters used for various operations, or the like, and a drive circuit that drives devices subjected to various control. Each control unit includes a network I/F for communicating with another control unit via the communication network 7010, and includes a communication I/F for communication by wired communication or wireless communication with vehicle interior or exterior device, a sensor, or the like.
The drive system control unit 7100 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 7100 functions as a control device of a driving force generation device for generating a driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking device that generates a braking force of the vehicle, and the like. The drive system control unit 7100 may have a function as a control device such as an antilock brake system (ABS) or an electronic stability control (ESC).
A vehicle state detection part 7110 is connected to the drive system control unit 7100. The vehicle state detection part 7110 includes, for example, at least one of a gyro sensor that detects the angular velocity of the axis rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, or a sensor for detecting an operation amount of an accelerator pedal, an operation amount of a brake pedal, steering of a steering wheel, an engine rotation speed, a wheel rotation speed, or the like. The drive system control unit 7100 performs operation processing using the signal input from the vehicle state detection part 7110 and controls the internal combustion engine, the driving motor, the electric power steering device, the brake device, or the like.
The body system control unit 7200 controls the operation of various devices mounted on the vehicle according to various programs. For example, the body system control unit 7200 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a head lamp, a back lamp, a brake lamp, a turn indicator, or a fog lamp. In this case, radio waves transmitted from a portable device that substitutes keys or signals of various switches may be input to the body system control unit 7200. The body system control unit 7200 receives input of these radio waves or signals and controls a door lock device, a power window device, a lamp, or the like of the vehicle.
The battery control unit 7300 controls a secondary battery 7310 that is a power supply source of the driving motor according to various programs. For example, information such as battery temperature, a battery output voltage, or remaining capacity of the battery is input to the battery control unit 7300 from the battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals and controls the temperature adjustment of the secondary battery 7310, or the cooling device or the like included in the battery device.
The vehicle exterior information detection unit 7400 detects information outside the vehicle equipped with the vehicle control system 7000. For example, at least one of the imaging part 7410 or the vehicle exterior information detection part 7420 is connected to the vehicle exterior information detection unit 7400. The imaging part 7410 includes at least one of a time of flight (ToF) camera, a stereo camera, a monocular camera, an infrared camera, or other cameras. The vehicle exterior information detection part 7420 includes, for example, at least one of an environmental sensor for detecting the current weather or climate, or an ambient information detection sensor for detecting another vehicle, an obstacle, a pedestrian, or the like around the vehicle equipped with the vehicle control system 7000.
The environmental sensor may be, for example, at least one of a raindrop sensor that detects rain, a fog sensor that detects mist, a sunshine sensor that detects sunshine degree, or a snow sensor that detects snowfall. The ambient information detection sensor may be at least one of an ultrasonic sensor, a radar device, or a light detection and ranging, laser imaging detection and ranging (LIDAR) device. The imaging part 7410 and the vehicle exterior information detection part 7420 may be provided as independent sensors or devices, respectively, or may be provided as a device in which a plurality of sensors or devices are integrated.
Here,
Note that
The vehicle exterior information detection parts 7920, 7922, 7924, 7926, 7928, and 7930 provided on the front, rear, side, or corner of the vehicle 7900 and the windshield in the upper portion of the vehicle compartment may be ultrasonic sensors or radar devices, for example. The vehicle exterior information detection parts 7920, 7926, and 7930 provided at the front nose, the rear bumper, or the back door of the vehicle 7900, and the upper portion of the windshield of the vehicle compartment may be the LIDAR device, for example. These vehicle exterior information detection parts 7920 to 7930 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, or the like.
Returning to
Furthermore, the vehicle exterior information detection unit 7400 may perform image recognition processing of recognizing a person, a car, an obstacle, a sign, a character on a road surface, or the like, or distance detection processing, on the basis of the received image data. The vehicle exterior information detection unit 7400 performs processing such as distortion correction or positioning on the received image data and synthesizes the image data imaged by different imaging parts 7410 to generate an overhead view image or a panorama image. The vehicle exterior information detection unit 7400 may perform viewpoint conversion processing using image data imaged by different imaging parts 7410.
The vehicle interior information detection unit 7500 detects vehicle interior information. For example, a driver state detection part 7510 that detects the state of the driver is connected to the vehicle interior information detection unit 7500. The driver state detection part 7510 may include a camera for imaging the driver, a biometric sensor for detecting the biological information of the driver, a microphone for collecting sound in the vehicle compartment, and the like. The biometric sensor is provided on, for example, a seating surface, a steering wheel, or the like, and detects biometric information of an occupant sitting on a seat or a driver holding a steering wheel. The vehicle interior information detection unit 7500 may calculate the degree of fatigue or the degree of concentration of the driver on the basis of the detection information input from the driver state detection part 7510, and may determine whether or not the driver is sleeping. The vehicle interior information detection unit 7500 may perform processing such as noise canceling processing on the collected sound signal.
The integrated control unit 7600 controls the overall operation of the vehicle control system 7000 according to various programs. An input part 7800 is connected to the integrated control unit 7600. The input part 7800 is realized by a device such as a touch panel, a button, a microphone, a switch, or a lever that can be input operated by an occupant, for example. Data obtained by performing speech recognition on the sound input by the microphone may be input to the integrated control unit 7600. The input part 7800 may be, for example, a remote control device using infrared rays or other radio waves, or an external connection device such as a mobile phone or a personal digital assistant (PDA) corresponding to the operation of the vehicle control system 7000. The input part 7800 may be, for example, a camera, in which case the occupant can input information by gesture. Alternatively, data obtained by detecting the movement of the wearable device worn by the occupant may be input. Moreover, the input part 7800 may include, for example, an input control circuit or the like that generates an input signal on the basis of information input by an occupant or the like using the input part 7800 and outputs the input signal to the integrated control unit 7600. By operating the input part 7800, an occupant or the like inputs various data or gives an instruction on processing operation to the vehicle control system 7000.
The storage part 7690 may include a read only memory (ROM) that stores various programs to be executed by the microcomputer, and a random access memory (RAM) that stores various parameters, operation results, sensor values, or the like. Furthermore, the storage part 7690 may be realized by a magnetic storage device such as a hard disc drive (HDD), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
The general-purpose communication I/F 7620 is a general-purpose communication I/F that mediates communication with various devices existing in an external environment 7750. A cellular communication protocol such as global system of mobile communications (GSM) (registered trademark), WiMAX (registered trademark), long term evolution (LTE (registered trademark)), or LTE-advanced (LTE-A), or another wireless communication protocol such as a wireless LAN (Wi-Fi (registered trademark)) or Bluetooth (registered trademark), may be implemented in the general-purpose communication I/F 7620. The general-purpose communication I/F 7620 may be connected to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or a company-specific network) via a base station or an access point, for example. Furthermore, the general-purpose communication I/F 7620 may be connected with a terminal existing in the vicinity of the vehicle (for example, a terminal of a driver, a pedestrian, or a shop, or a machine type communication (MTC) terminal) using, for example, the peer to peer (P2P) technology.
The dedicated communication I/F 7630 is a communication I/F supporting a communication protocol formulated for use in a vehicle. For example, in the dedicated communication I/F 7630, a standard protocol such as the wireless access in vehicle environment (WAVE), which is a combination of the lower-layer IEEE 802.11p and the upper-layer IEEE 1609, the dedicated short range communications (DSRC), or the cellular communication protocol may be implemented. Typically, the dedicated communication I/F 7630 performs V2X communication, which is a concept including one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.
The positioning part 7640 receives, for example, a global navigation satellite system (GNSS) signal from a GNSS satellite (for example, a GPS signal from a global positioning system (GPS) satellite) and performs positioning, to generate position information including the latitude, longitude, and altitude of the vehicle. Note that the positioning part 7640 may specify the current position by exchanging signals with the wireless access point or may acquire the position information from a terminal such as a mobile phone, a PHS, or a smartphone having a positioning function.
The beacon reception part 7650 receives, for example, radio waves or electromagnetic waves transmitted from a radio station or the like installed on the road, and acquires information such as the current position, congestion, road closure, or required time. Note that the function of the beacon reception part 7650 may be included in the dedicated communication I/F 7630 described above.
The vehicle interior equipment I/F 7660 is a communication interface that mediates connection between the microcomputer 7610 and various interior equipment 7760 existing in the vehicle. The vehicle interior equipment I/F 7660 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), near field communication (NFC), or a wireless USB (WUSB). Furthermore, the vehicle interior equipment I/F 7660 may establish wired connection such as a universal serial bus (USB), a high-definition multimedia interface (HDMI (registered trademark)), or a mobile high-definition link (MHL) via a connection terminal not shown (and a cable if necessary). The vehicle interior equipment 7760 may include, for example, at least one of a mobile device or a wearable device possessed by an occupant, or an information device carried in or attached to the vehicle. Furthermore, the vehicle interior equipment 7760 may include a navigation device that performs a route search to an arbitrary destination. The vehicle interior equipment I/F 7660 exchanges control signals or data signals with these vehicle interior equipment 7760.
The in-vehicle network I/F 7680 is an interface mediating communication between the microcomputer 7610 and the communication network 7010. The in-vehicle network I/F 7680 transmits and receives signals and the like according to a predetermined protocol supported by the communication network 7010.
The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 in accordance with various programs on the basis of information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning part 7640, the beacon reception part 7650, the vehicle interior equipment I/F 7660, or the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate a control target value of the driving force generation device, the steering mechanism, or the braking device on the basis of acquired information inside and outside the vehicle, and output a control command to the drive system control unit 7100. For example, the microcomputer 7610 may perform cooperative control for the purpose of realizing functions of an advanced driver assistance system (ADAS) including collision avoidance or impact mitigation of the vehicle, follow-up running based on inter-vehicle distance, vehicle speed maintenance running, vehicle collision warning, vehicle lane departure warning, or the like. Furthermore, the microcomputer 7610 may perform cooperative control for the purpose of automatic driving or the like in which a vehicle autonomously runs without depending on the operation of the driver, by controlling the driving force generation device, the steering mechanism, the braking device, or the like on the basis of the acquired information on the surroundings of the vehicle.
The microcomputer 7610 may generate three-dimensional distance information between the vehicle and an object such as a surrounding structure or a person on the basis of the information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning part 7640, the beacon reception part 7650, the vehicle interior equipment I/F 7660, or the in-vehicle network I/F 7680, and create local map information including peripheral information on the current position of the vehicle. Furthermore, the microcomputer 7610 may predict danger such as collision of a vehicle, approach of a pedestrian or the like, or entry into a road where traffic is stopped on the basis of acquired information to generate a warning signal. The warning signal may be, for example, a signal for generating an alarm sound or for turning on a warning lamp.
The audio image output part 7670 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the occupant of the vehicle or the outside of the vehicle, of information. In the example of
Note that, in the example shown in
Note that a computer program for realizing each function of the image processing device 11 according to the present embodiment described with reference to
In the vehicle control system 7000 described above, the image processing device 11 according to the present embodiment described with reference to
Furthermore, at least a part of the components of the image processing device 11 described with reference to
<Example of Configuration Combination>
Note that, the present technology can also adopt the following configuration.
(1)
An image processing device including:
a determination part that determines a predetermined viewpoint of a viewpoint image related to periphery of a moving object in a case of viewing the moving object from the viewpoint according to a speed of the moving object that can move at an arbitrary speed;
a generation part that generates the viewpoint image that is a view from the viewpoint determined by the determination part; and
a synthesis part that synthesizes an image related to the moving object at a position where the moving object can exist in the viewpoint image.
(2)
The image processing device according to (1) described above,
in which, in a case where the speed of the moving object is a first speed, the determination part determines the viewpoint such that an angle of a line-of-sight direction from the viewpoint to a vertical direction is larger than in a case of a second speed at which the speed of the moving object is lower than the first speed.
(3)
The image processing device according to (1) or (2) described above,
further including
an estimation part that estimates motion of another object in periphery of the moving object to determine a motion vector, in which the determination part calculates the speed of the moving object on the basis of the motion vector determined by the estimation part, and determines the viewpoint.
(4)
The image processing device according to (3) described above,
further including
a motion compensation part that compensates the another object captured in a past image of the periphery of the moving object captured at a past time point to a position where the another object should be located currently on the basis of the motion vector determined by the estimation part,
in which the synthesis part synthesizes an image related to the moving object at a position where the moving object can currently exist in the past image on which motion compensation has been performed by the motion compensation part.
(5)
The image processing device according to (4) described above,
in which the generation part performs projection conversion according to the viewpoint on an image synthesizing result obtained by the synthesis part synthesizing the image related to the moving object with the past image to generate the viewpoint image.
(6)
The image processing device according to any one of (1) to (5) described above,
further including:
a texture generation part that generates a texture of another object in the periphery of the moving object from an image acquired by capturing the periphery of the moving object; and
a three-dimensional model configuration part that configures a three-dimensional model of the another object in the periphery of the moving object from a depth image acquired by sensing the periphery of the moving object,
in which the generation part performs perspective projection conversion of generating a perspective projection image of a view of the three-dimensional model attached with the texture viewed from the viewpoint, and
the synthesis part synthesizes the image related to the moving object at a position where the moving object can exist in the perspective projection image to generate the viewpoint image.
(7)
The image processing device according to any one of (1) to (6) described above,
in which the determination part determines the viewpoint at a position further rearward than the moving object when the moving object is moving forward, and at a position further forward than the moving object when the moving object is moving backward.
(8)
The image processing device according to (7) described above,
in which the determination part determines the viewpoint such that an angle of a line-of-sight direction from the viewpoint to a vertical direction is larger when the moving object is moving forward than when the moving object is moving backward.
(9)
The image processing device according to any one of (1) to (8) described above,
in which the determination part determines the viewpoint according to the speed of the moving object determined from at least two images of the periphery of the moving object captured at different timings.
(10)
The image processing device according to any one of (1) to (9) described above,
in which the determination part moves an origin of the viewpoint from a center of the moving object by a moving amount according to the speed of the moving object.
(11)
The image processing device according to (10) described above,
in which in a case where the moving object is moving backward, the determination part moves the origin to a rear portion of the moving object.
(12)
The image processing device according to any one of (1) to (11) described above,
further including: a distortion correction part that corrects distortion occurring in an image acquired by capturing the periphery of the moving object at a wide angle; and
a depth image synthesis part that performs processing of improving resolution of a depth image acquired by sensing the periphery of the moving object, using the image whose distortion has been corrected by the distortion correction part as a guide signal,
in which generation of the viewpoint image uses a past frame and a current frame of the image whose distortion has been corrected by the distortion correction part, and a past frame and a current frame of the depth image whose resolution has been improved by the depth image synthesis part.
(13)
An image processing method including:
by an image processing device that performs image processing
determining a predetermined viewpoint of a viewpoint image related to periphery of a moving object in a case of viewing the moving object from the viewpoint according to a speed of the moving object that can move at an arbitrary speed;
generating the viewpoint image that is a view from the viewpoint determined; and
synthesizing an image related to the moving object at a position where the moving object can exist in the viewpoint image.
(14)
A program that causes
a computer of an image processing device that performs image processing to perform image processing including:
determining a predetermined viewpoint of a viewpoint image related to periphery of a moving object in a case of viewing the moving object from the viewpoint according to a speed of the moving object that can move at an arbitrary speed;
generating the viewpoint image that is a view from the viewpoint determined; and
synthesizing an image related to the moving object at a position where the moving object can exist in the viewpoint image.
Note that the present embodiment is not limited to the above-described embodiments, and various modifications are possible without departing from the gist of the present disclosure. Furthermore, the effects described in the present specification are merely examples and are not intended to be limiting, and other effects may be provided.
Number | Date | Country | Kind
---|---|---|---
2018-007149 | Jan 2018 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2019/000031 | 1/4/2019 | WO | 00