An in-vehicle camera automatic-exposure apparatus and an in-vehicle automatic-exposure method according to a first embodiment of the present invention are described below with reference to
Of these in-vehicle cameras 9, 10, 11, and 12, the in-vehicle camera 9 is installed in a front part (for example, an emblem part) of the vehicle and is adapted to take an image of a scene in front of the vehicle. Hereinafter, the in-vehicle camera 9 will also be referred to as the front camera 9. The in-vehicle camera 10 is installed in a rear part (for example, a rear license garnish) of the vehicle and is adapted to take an image of a scene behind the vehicle. Hereinafter, the in-vehicle camera 10 will also be referred to as the back camera 10. The in-vehicle camera 11 is installed on a right side (for example, on a right-side door mirror) of the vehicle and is adapted to take an image of a scene to the right of the vehicle. Hereinafter, this in-vehicle camera 11 will also be referred to as the right side camera 11. The in-vehicle camera 12 is installed on a left side (for example, on a left-side door mirror) of the vehicle and is adapted to take an image of a scene to the left of the vehicle. Hereinafter, this in-vehicle camera 12 will also be referred to as the left side camera 12.
The in-vehicle cameras 9, 10, 11, and 12 are connected, via communication connection means 15 such as a cable, to an ECU (Electronic Control Unit) 14 which forms the in-vehicle camera automatic-exposure apparatus together with the in-vehicle cameras 9, 10, 11, and 12. The ECU 14 may also be connected to a display 17 via communication connection means 16 such as a cable.
The ECU 14 receives data of the image of the scene in front of the vehicle from the front camera 9, data of the image of the scene behind the vehicle from the back camera 10, data of the image of the scene to the right of the vehicle from the right side camera 11, and data of the image of the scene to the left of the vehicle from the left side camera 12. The ECU 14 produces an image showing the vehicle and its nearby surroundings viewed from above the vehicle, from the image data supplied from the in-vehicle cameras 9, 10, 11, and 12. The ECU 14 displays the resultant produced image on the display 17.
The ECU 14 is described in further detail below with reference to
The ECU 14 may also include a back camera image input unit 20 whose input end is connected to the back camera 10. Analog image data output from the back camera 10 is input to the back camera image input unit 20. The back camera image input unit 20 converts the image data input from the back camera 10 from analog form into digital form, and supplies the resultant digital image data to a part in the ECU 14.
The ECU 14 may also include a right side camera image input unit 21 whose input end is connected to the right side camera 11. Analog image data output from the right side camera 11 is input to the right side camera image input unit 21. The right side camera image input unit 21 converts the input image data supplied from the right side camera 11 from analog form into digital form, and supplies the resultant digital image data to a part in the ECU 14.
The ECU 14 may also include a left side camera image input unit 22 whose input end is connected to the left side camera 12. Analog image data output from the left side camera 12 is input to the left side camera image input unit 22. The left side camera image input unit 22 converts the image data input from the left side camera 12 from analog form into digital form, and supplies the resultant digital image data to a part in the ECU 14.
The output end of each of the front camera image input unit 19, the back camera image input unit 20, the right side camera image input unit 21, and the left side camera image input unit 22 is connected to a camera image processing unit 24. Thus, the image data output from the respective in-vehicle cameras 9, 10, 11, and 12 are supplied to the camera image processing unit 24 via the camera image input units 19, 20, 21, and 22.
From the image data supplied from each of the in-vehicle cameras 9, 10, 11, and 12, the camera image processing unit 24 extracts a particular area of interest to be used to produce an image of the vehicle and its nearby surroundings viewed from above. Hereinafter, the particular area of interest to be used to produce such an image will be referred to simply as the particular area of interest. The extracted image in the particular area of interest is output from the camera image processing unit 24.
For example, to produce an image of the vehicle and its nearby surroundings viewed from above at a parking lot, a particular area is extracted from the image in the imaging area of each of the in-vehicle cameras 9, 10, 11, and 12 so as to include only the ground surface of the parking lot but include no unnecessary objects such as the sky. The parts of the images output from the in-vehicle cameras 9, 10, 11, and 12 other than the image in the particular area of interest are discarded.
The rule specifying which area should be extracted from the image taken by each of the in-vehicle cameras 9, 10, 11, and 12, that is, which area of each image should be used, may be preset in the ECU 14.
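The preset extraction rule described above can be sketched as follows. This is a minimal illustration only; the camera names and crop rectangles are hypothetical values, not taken from the embodiment, which would define its own rule for each installation location.

```python
# Sketch of a preset extraction rule, assuming each rule is a fixed
# crop rectangle (x, y, width, height) in pixels. The rectangle values
# below are hypothetical placeholders.
EXTRACTION_RULES = {
    "front": (0, 240, 640, 240),  # lower half of the frame: ground, no sky
    "back":  (0, 200, 640, 280),
    "right": (0, 220, 640, 260),
    "left":  (0, 220, 640, 260),
}

def extract_particular_area(image, camera):
    """Keep only the particular area of interest; the rest is discarded.

    The image is represented as a list of pixel rows.
    """
    x, y, w, h = EXTRACTION_RULES[camera]
    return [row[x:x + w] for row in image[y:y + h]]
```

Applied to a 640x480 front-camera frame with the rule above, only the lower 240 rows survive, corresponding to the ground surface in front of the vehicle.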
The output end of the camera image processing unit 24 is connected to a calculation unit 25 so that the data of the image in the particular area of interest, extracted by the camera image processing unit 24 from the data output from the in-vehicle cameras 9, 10, 11, and 12, is input to the calculation unit 25.
The calculation unit 25 may be connected to a program storage unit 26 in which a program for producing the image showing the vehicle and the nearby surroundings thereof viewed from above the vehicle is stored. The calculation unit 25 executes this program to produce the image of the vehicle and its nearby surroundings viewed from above, using the data of the image in the particular area of interest extracted by the camera image processing unit 24 from the data output from the in-vehicle cameras 9, 10, 11, and 12.
More specifically, the calculation unit 25 produces image data of the surroundings (background) of the vehicle from the data of the image in the particular area of interest extracted from the data output from the in-vehicle cameras 9, 10, 11, and 12, and the calculation unit 25 produces the image (image data) of the vehicle and its nearby surroundings viewed from above the vehicle by combining the above-described image data with image data representing a plan view of the vehicle.
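The combining step can be sketched as overlaying the plan-view image of the vehicle onto the composed surroundings image. This is a simplified illustration under the assumption that the per-camera images have already been projected onto a common top-down plane; the geometric warping itself is outside the scope of the sketch.

```python
def compose_top_view(surroundings, plan_view, top_left):
    """Overlay a plan view of the vehicle onto the surroundings image.

    Both images are lists of pixel rows; top_left = (row, col) gives the
    position of the vehicle graphic. Each camera image is assumed to be
    already projected to the top-down plane before this step.
    """
    canvas = [row[:] for row in surroundings]  # copy; leave input intact
    r0, c0 = top_left
    for i, row in enumerate(plan_view):
        for j, pixel in enumerate(row):
            canvas[r0 + i][c0 + j] = pixel
    return canvas
```

The copy on the first line keeps the background image reusable for the next video frame, which is a design choice of this sketch rather than a requirement of the apparatus.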
The output end of the calculation unit 25 may be connected to the display 17 such that the image of the vehicle and its nearby surroundings viewed from above the vehicle, produced by the calculation unit 25, is output to the display 17. Thus, the image of the vehicle and its nearby surroundings viewed from above the vehicle is displayed on the display 17.
In one embodiment, the calculation unit 25 may include a metering area setting unit 28 serving as the metering area setting apparatus which sets the metering area used in the automatic exposure when the in-vehicle cameras 9, 10, 11, and 12 take an image of areas surrounding the vehicle. Furthermore, in one embodiment, a program for setting the metering area is stored in the program storage unit 26. The metering area setting unit 28 sets the metering area by executing this program.
The calculation unit 25 may also be connected to a data storage unit 30 in which information indicating the installation locations on a vehicle of the in-vehicle cameras 9, 10, 11, and 12 (hereinafter, referred to simply as installation location information) is stored.
The calculation unit 25 may also be connected to a vehicle information detector 31 adapted to detect information indicating the current time (hereinafter, referred to simply as time information) and information indicating the direction of the vehicle (hereinafter, referred to simply as vehicle direction information). The vehicle information detector 31 is installed in the vehicle. The above information may be acquired, for example, such that the vehicle information detector 31 is connected to an in-vehicle navigation apparatus 5 including a GPS receiver and/or a direction sensor, and the information is acquired from the in-vehicle navigation apparatus 5.
In one embodiment, the metering area setting unit 28 sets the metering area such that an optimum exposure condition is obtained for the particular area of interest of the image. More specifically, as shown in
The metering area 34 determined in the above-described manner for the back camera 10 for use in the daytime may not include unnecessary objects such as the sky, and is very similar in shape and size to the particular area 33 of interest of the image. Thus, the resultant metering area 34 allows the exposure condition to be determined to adequately take into account the brightness of the ground surface 2 of the parking lot.
In one embodiment, the metering area setting unit 28 acquires the installation location information stored in the data storage unit 30, and, in accordance with the acquired installation location information, the metering area setting unit 28 sets the metering area adequately such that an exposure condition determined based on the metering area is optimal for the particular area of interest of the image, depending on the installation locations of the respective in-vehicle cameras 9, 10, 11, and 12.
The back camera 10 may be installed, for example, in a rear license garnish part, or the like, that prevents the back camera 10 from directly receiving sunlight, while the front camera 9 may be installed, for example, in an emblem part or the like where it is difficult to prevent the front camera 9 from directly receiving sunlight.
When the front camera 9 takes an image of a surrounding scene including a ground surface 2 of a parking lot in the daytime, the metering area setting unit 28 sets the metering area 35 such that the upper end of the metering area 35 is located at a lower position than the upper end of the metering area 34 for the back camera 10 shown in
That is, for use in the daytime, the metering area 35 for the front camera 9 is set depending on the camera installation location such that the upper end of the metering area 35 is lowered to prevent the metering area 35 from being influenced by direct sunlight. Thus, the resultant metering area 35 allows the exposure condition to be determined so as to adequately take into account the brightness of the ground surface 2 of the parking lot.
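The location-dependent setting described above can be sketched as follows, in metering-block rows. The location names and the amount by which the upper edge is lowered are assumptions introduced purely for illustration; the embodiment determines them from the installation location information in the data storage unit 30.

```python
# Hypothetical mounting locations; "emblem" (front camera 9) is assumed
# exposed to direct sunlight, "license_garnish" (back camera 10) shaded.
SUN_EXPOSED_LOCATIONS = {"emblem"}
SHADED_LOCATIONS = {"license_garnish"}

def metering_top_row(roi_top_row, location, lowered_rows=2):
    """Top row of the metering area, in metering-block units.

    For a sun-exposed location, the upper end is lowered below the top
    of the particular area of interest to keep direct sunlight out of
    the metering; otherwise it is aligned with the top of that area.
    """
    if location in SUN_EXPOSED_LOCATIONS:
        return roi_top_row + lowered_rows  # lowered upper end
    return roi_top_row                     # aligned upper end
```

With the assumed values, a front camera mounted in the emblem part meters from two block rows below the top of its particular area of interest, while the back camera meters from the top of its area.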
In an alternative embodiment, the metering area setting unit 28 acquires time information detected by the vehicle information detector 31 and sets the metering area depending on the acquired time information such that the exposure condition determined using the metering area depending on the time becomes optimum for the particular area of interest of the image. More specifically, for example, when the front camera 9 takes an image of a surrounding scene including the ground surface 2 of the parking lot in the night-time, it is not necessary to take into account the effects of direct sunlight. Thus, the metering area setting unit 28 sets the metering area 37, as shown in
In this case, the metering area 37 is set such that its upper end is located at the same position as the upper end of the particular area 36 of interest in the imaging area 1 of the front camera 9, and such that the metering area 37 best approximates the particular area 36 of interest.
The time-dependent metering area 37 determined in the above-described manner for the front camera 9 may not include unnecessary objects such as the sky, and is very similar in shape and size to the particular area 36 of interest in the imaging area of the front camera 9. Thus, use of this metering area 37 makes it possible to correctly determine the exposure condition for the image in the particular area of interest adequately taking into account the brightness of the ground surface 2 of the parking lot including an area 2a illuminated with light emitted from headlamps.
In another example, as described below with reference to
Thus, the setting of the metering area 38 in the above-described manner makes it possible for the exposure condition to be correctly determined taking into account the brightness of the ground surface 2 of the parking lot without being influenced by direct sunlight even at times at which the front camera 9 is likely to receive direct sunlight.
In an alternative embodiment, the metering area setting unit 28 acquires vehicle direction information detected by the vehicle information detector 31 and sets the metering area in accordance with the acquired vehicle direction information such that the determined metering area allows the exposure condition to be correctly determined for the image in the particular area of interest, depending on the vehicle direction. For example, as shown in
On the other hand, the metering area 41 for the left-side camera 12 is substantially not influenced by direct sunlight, and thus the metering area 41 is set such that the upper end of the metering area 41 is located at the same position as the upper end of the particular area 43 of interest to be extracted, for use to take the image of the ground surface 2 of the parking lot, from the imaging area 1 of the left-side camera 12 and such that the metering area 41 best approximates the particular area 43 of interest.
Thus, the metering areas 40 and 41 for use in the daytime for the left-side camera 12 and the right-side camera 11 are determined in the above-described manner depending on the direction of the vehicle 39, and the exposure conditions are determined adequately taking into account the brightness of the ground surface 2 of the parking lot.
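A rough test of whether a given side camera is likely to receive direct sunlight, given the vehicle direction, might look as follows. The angle convention and the 90-degree threshold are assumptions for illustration; the embodiment only requires that the vehicle direction information from the vehicle information detector 31 be mapped to a per-camera decision of this kind.

```python
def side_camera_sun_exposed(vehicle_heading_deg, sun_azimuth_deg, side):
    """True if the side camera faces toward the sun.

    Angles are degrees clockwise from north (an assumed convention).
    The right-side camera looks 90 degrees clockwise of the heading,
    the left-side camera 90 degrees counterclockwise.
    """
    camera_dir = (vehicle_heading_deg + (90 if side == "right" else 270)) % 360
    # Smallest angular difference between camera axis and sun azimuth.
    diff = abs((camera_dir - sun_azimuth_deg + 180) % 360 - 180)
    return diff < 90
```

For a vehicle heading north with the sun in the east, the function reports the right-side camera as sun-exposed (so its metering area gets a lowered upper end, like area 40) and the left-side camera as shaded (so its metering area can match the particular area of interest, like area 41).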
Referring again to
The in-vehicle cameras 9, 10, 11, and 12 are described in further detail below. As shown in
Each of the in-vehicle cameras 9, 10, 11, and 12 may have a CCD (Charge-Coupled Device) 47. The light passing through the wide angle lens 46 is focused on the light sensing surface of the CCD 47. The CCD 47 converts the light incident on the light sensing surface thereof into raw data via a photoelectric conversion, and supplies the resultant raw data to an internal part of each of the in-vehicle cameras 9, 10, 11, and 12.
The output end of the CCD 47 may be connected to an AFE (Analog Front End) 49 such that the raw data output from the CCD 47 is input to the AFE 49. The AFE 49 may include a CDS (Correlated Double Sampling) circuit 53, an AGC (Auto Gain Control) circuit 54 connected to the output end of the CDS circuit 53, an analog-to-digital converter 55 connected to the output end of the AGC circuit 54, and a TG (Timing Generator) 56. The CDS circuit 53 removes noise from the raw data input from the CCD 47 and outputs the resultant raw data to the AGC circuit 54.
The AGC circuit 54 controls the input gain of the raw data to a value corresponding to the determined exposure condition and supplies the resultant data to the analog-to-digital converter 55, thereby automatically controlling the exposure of the in-vehicle cameras 9, 10, 11, and 12. Under the input-gain control of the AGC circuit 54, the analog-to-digital converter 55 converts the raw data into digital data and outputs the resultant digital data to the outside of the AFE 49.
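The gain control can be modeled numerically as follows. This is a toy digital model under assumed values (8-bit samples, a target mean brightness); the actual AGC circuit 54 operates on the analog signal ahead of the converter.

```python
def apply_agc_gain(raw_samples, metered_samples, target_mean=128):
    """Toy model of auto gain control.

    Derive a gain from the mean brightness metered inside the metering
    area, then apply it to the raw samples, clipping to the 8-bit range.
    The target mean of 128 is an assumption for illustration.
    """
    metered_mean = sum(metered_samples) / len(metered_samples)
    gain = target_mean / max(metered_mean, 1e-6)  # avoid division by zero
    return [min(255, round(s * gain)) for s in raw_samples]
```

Because the gain is derived only from samples inside the metering area, brightness outside that area (for example, the sky excluded by the lowered upper end) has no influence on the exposure, which is the point of the metering-area setting described above.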
The TG 56 controls a horizontal scanning operation of the CCD 47.
The AFE 49 may be connected to a DSP (Digital Signal Processor) 50 such that the digital data converted from the raw data by the analog-to-digital converter 55 is input to the DSP 50. The DSP 50 converts the given raw data into a YUV signal by performing signal processing, including a gamma-correction process, on the given raw data, and the DSP 50 outputs the resultant YUV signal. The camera controller 45 outputs a control signal to the DSP 50. The DSP 50 determines the exposure condition by performing metering based on the metering area according to the received control signal. Information associated with the determined exposure condition (hereinafter referred to as exposure information) is supplied to the AGC circuit 54 of the AFE 49.
Because the exposure condition is determined by the DSP 50 on the basis of the metering area set by the metering area setting unit 28, the determined exposure condition is optimal for the particular area of interest of the image. More specifically, in the case in which an image of the ground surface 2 of the parking lot shown in one of
The output end of the DSP 50 may be connected to a video encoder 51 such that the YUV signal output from the DSP 50 is input to the video encoder 51. The video encoder 51 converts the input YUV signal in digital form into an analog signal according to the NTSC (National Television System Committee) standard. The resultant NTSC signal is output, as image data taken by the in-vehicle camera 9, 10, 11, or 12, to the ECU 14.
In the present embodiment, there is provided a V-driver 57 between the TG 56 and the CCD 47 in the AFE 49. The V-driver 57 converts the voltage level of the voltage output from the TG 56 to a proper level and controls vertical scanning of the CCD 47 using the converted voltage.
The operation of the in-vehicle camera system 8 using the automatic exposure method according to the first embodiment of the present invention is described below. In the present embodiment, first, a user operates an input device such as a touch panel on the display 17 or a remote control (not shown) to input a command to the ECU 14 to display an image of the vehicle and the nearby surroundings thereof viewed from above the vehicle. In response to the input operation, the metering area setting unit 28 sets the metering area in accordance with at least one of the installation locations of the in-vehicle cameras 9, 10, 11, and 12, the time, and the vehicle direction such that an exposure condition is determined optimally for the particular area of interest of the image, such as that shown in one of
The camera controller 45 outputs a control signal to the DSP 50 of each of the in-vehicle cameras 9, 10, 11, and 12 to perform metering in accordance with the metering area information output from the metering area setting unit 28. The DSP 50 determines the exposure condition by performing metering using the metering area in accordance with the control signal output from the camera controller 45, and outputs the exposure information indicating the determined exposure condition to the AGC circuit 54 of the AFE 49. In accordance with the exposure information output from the DSP 50, the AGC circuit 54 controls the input gain for the analog-to-digital conversion of the raw data output from the CCD 47.
Thus, automatic exposure is performed in accordance with at least one of the installation locations of the in-vehicle cameras 9, 10, 11, and 12, the time, and the vehicle direction. For example, when an image of a surrounding scene including the ground surface 2 of the parking lot is taken in the daytime by the in-vehicle cameras 9, 10, 11, and 12, the metering area 34 of the back camera 10 is set as shown in
For example, when an image of a surrounding scene including a ground surface 2 of a parking lot is taken in the nighttime by the in-vehicle cameras 9, 10, 11, and 12, because the metering area 37 for the front camera 9 is set as shown in
The raw data taken in the above-described manner and converted from analog form into digital form by the analog-to-digital converter 55 is output to the DSP 50. The DSP 50 converts the raw data supplied from the analog-to-digital converter 55 into a YUV signal and outputs the resultant YUV signal to the video encoder 51. The video encoder 51 converts the YUV signal output from the DSP 50 into an NTSC signal and outputs the resultant NTSC signal, as image data taken by the respective in-vehicle cameras 9, 10, 11, and 12, to the ECU 14.
When the ECU 14 receives the image data taken and output by the respective in-vehicle cameras 9, 10, 11, and 12, the respective image data is input to the camera image input units 19, 20, 21, and 22 corresponding to the in-vehicle cameras 9, 10, 11, and 12. Each of the camera image input units 19, 20, 21, and 22 converts the input image data from analog form into digital form and outputs the resultant digital image data to the camera image processing unit 24.
The camera image processing unit 24 extracts an image in the particular area of interest (for example, an image of the ground surface 2 of the parking lot) from the image data output from each of the camera image input units 19, 20, 21, and 22, and the camera image processing unit 24 supplies the extracted data of the image of the particular area of interest to the calculation unit 25. The calculation unit 25 produces an image of the vehicle and the surroundings thereof viewed from above by using the data of the images in the particular areas of interest output from the camera image processing unit 24. The produced image is displayed on the display 17.
Because the image in each particular area of interest is taken under the optimum exposure condition, it is possible to produce a good image 60 of the vehicle and the surrounding scene viewed from above, in which there is no significant difference in brightness among the area in front of the vehicle, the area behind the vehicle, the area to the left of the vehicle, and the area to the right of the vehicle, as shown in
An in-vehicle camera automatic-exposure apparatus and an in-vehicle automatic-exposure method according to a second embodiment of the present invention are described below with reference to
An in-vehicle camera automatic-exposure apparatus includes in-vehicle cameras 11 and 12 and an ECU 14. As in the first embodiment described above, the ECU 14 is connected to a display 17 to form an in-vehicle camera system 8. However, unlike the first embodiment, the right-side camera 11 or the left-side camera 12 takes an image of a scene surrounding the vehicle including a road surface, and a particular area is extracted from the image taken by the right-side camera 11 or the left-side camera 12 to produce therefrom an image for checking whether there is an obstacle (hereinafter, referred to simply as an obstacle checking image), which is displayed on the display 17. To achieve the above object, the in-vehicle automatic-exposure apparatus may include only the right-side camera 11 and the left-side camera 12 as the in-vehicle cameras. However, to configure the in-vehicle automatic-exposure apparatus to also have the capability of displaying the image of the vehicle and the surrounding scene viewed from above as in the first embodiment, the in-vehicle automatic-exposure apparatus may include additional in-vehicle cameras 9 and 10.
In a specific configuration according to the present embodiment, the camera image processing unit 24 extracts a particular area of interest selected for use in producing an image for checking whether there is an obstacle (hereinafter, referred to simply as a particular area of interest) from the data of the image taken by the right-side camera 11 or the left-side camera 12, and the camera image processing unit 24 outputs the extracted image data to the calculation unit 25.
The rule for selecting a particular area of interest for the above-described purpose from the imaging area of the in-vehicle camera 11 or 12, that is, the extraction rule, may be preset in the ECU 14 for each of the in-vehicle cameras 11 and 12.
The calculation unit 25 produces the obstacle checking image by enlarging the image in the particular area of interest output from the camera image processing unit 24. The produced obstacle checking image is displayed on the display 17.
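The enlargement step can be sketched as follows. Nearest-neighbour scaling is an assumption chosen for simplicity; the embodiment only states that the image in the particular area of interest is enlarged for display.

```python
def enlarge(area_of_interest, factor=2):
    """Enlarge the extracted image into the obstacle checking image.

    The image is a list of pixel rows; each pixel is repeated `factor`
    times horizontally and each row `factor` times vertically
    (nearest-neighbour interpolation, assumed here).
    """
    enlarged = []
    for row in area_of_interest:
        wide_row = [pixel for pixel in row for _ in range(factor)]
        enlarged.extend(wide_row[:] for _ in range(factor))
    return enlarged
```

The row copy (`wide_row[:]`) keeps each output row independent, so later per-row processing of the displayed image cannot alias across rows.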
For example, when the right-side camera 11 takes an image of a surrounding scene including the surface 62 of a road on the right side of the vehicle going to turn to the right, as shown on the left side of
A program for producing the obstacle checking image is stored in the program storage unit 26, and the calculation unit 25 produces the obstacle checking image by executing this program.
The vehicle information detector 31 detects the running conditions of the vehicle, such as a running direction of the vehicle, a steering angle of the steering wheel, etc. (hereinafter, the information indicating such running conditions will be referred to as running condition information). In the present embodiment, on the basis of the running condition information detected by the vehicle information detector 31, the calculation unit 25 determines which one of the cameras (the right-side camera 11 or the left-side camera 12) should be used to produce the obstacle checking image. For example, if the running condition information indicates that the vehicle is going to turn to the right, then the calculation unit 25 determines that the right-side camera 11 should be used to produce the image for checking that there is no obstacle in the path of the right turn.
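The camera selection can be sketched as a simple mapping from the detected running condition to a camera. The condition keys and camera identifiers below are hypothetical; the detector may equally report a steering angle from which the turn direction is derived.

```python
def select_obstacle_camera(running_condition):
    """Select the side camera used to produce the obstacle checking image.

    Returns None when neither side camera is needed (e.g. driving
    straight). The string keys are assumptions for illustration.
    """
    return {
        "turn_right": "right_side_camera_11",
        "turn_left": "left_side_camera_12",
    }.get(running_condition)
```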
In the present embodiment, the metering area setting unit 28 sets the metering area such that an optimum exposure condition is obtained for a particular area of interest in the imaging area for the right-side camera 11 or the left-side camera 12 which has been determined to be used to produce the obstacle checking image. For example, when the vehicle is going to turn to the right, it is determined that the right-side camera 11 should be used to produce an image of a surrounding scene including a road surface 62 of a road to which the vehicle is going to turn. In this case, as shown in
The metering area setting unit 28 selects a set of blocks 32 so as to best approximate the particular area 63 of interest in the imaging area 1 of the right-side camera 11, thereby setting the metering area 64. Note that the particular area 63 to be used may be preset in the metering area setting unit 28.
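The block selection described above can be sketched as follows: a metering block is included when enough of its area lies inside the particular area of interest, so that the chosen set of blocks best approximates that area. The regular block grid and the 50% overlap threshold are assumptions for illustration.

```python
def select_metering_blocks(roi, grid_rows, grid_cols,
                           block_h, block_w, min_overlap=0.5):
    """Choose metering blocks approximating the particular area of interest.

    roi is a rectangle (x, y, width, height) in pixels; blocks form a
    regular grid of grid_rows x grid_cols, each block_h x block_w pixels.
    A block is selected when at least min_overlap of its area overlaps
    the roi (threshold is an assumption).
    """
    rx, ry, rw, rh = roi
    chosen = []
    for r in range(grid_rows):
        for c in range(grid_cols):
            bx, by = c * block_w, r * block_h
            # Overlap of this block with the roi, per axis.
            ox = max(0, min(bx + block_w, rx + rw) - max(bx, rx))
            oy = max(0, min(by + block_h, ry + rh) - max(by, ry))
            if ox * oy >= min_overlap * block_w * block_h:
                chosen.append((r, c))
    return chosen
```

For instance, with a 2x2 grid of 10x10-pixel blocks and an area of interest covering the lower half of the frame, only the bottom row of blocks is selected, so the sky in the upper half contributes nothing to the metering.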
The metering area 64 set in the above-described manner for the right-side camera 11 makes it possible to determine an exposure condition that adequately takes into account the brightness of the road surface 62 of the road to which the vehicle is going to turn.
Information indicating the metering area set by the metering area setting unit 28 (hereinafter, referred to simply as metering area information) is output to the camera controller 45. The camera controller 45 outputs a control signal to the DSP 50 of the selected in-vehicle camera (the right-side camera 11 or the left-side camera 12) to perform metering in accordance with the metering area information.
The DSP 50 performs metering using the metering area in accordance with the control signal supplied from the camera controller 45 and determines the exposure condition. The DSP 50 then outputs information indicating the determined exposure condition (hereinafter referred to simply as exposure information) to the AGC circuit 54 of the AFE 49.
The AGC circuit 54 controls the input gain, used in the analog-to-digital conversion of the raw data, to a value corresponding to the exposure information, thereby performing automatic exposure of the selected in-vehicle camera (the right-side camera 11 or the left-side camera 12).
The operation of the in-vehicle camera system 8 using the automatic exposure method according to the second embodiment of the present invention is described below. If a user (a driver) of a vehicle performs a driving operation to turn to the right or left at an intersection or the like, the vehicle information detector 31 detects the driving operation as running condition information and outputs the running condition information to the calculation unit 25.
In accordance with the running condition information received from the vehicle information detector 31, the calculation unit 25 determines which in-vehicle camera (the right-side camera 11 or the left-side camera 12) should be used to produce the obstacle checking image.
The metering area setting unit 28 determines the metering area for the in-vehicle camera 11 or 12 selected by the calculation unit 25 such that the metering area allows the exposure condition to be determined so as to be optimal for the particular area of interest to be used to produce the obstacle checking image as shown in
The calculation unit 25 outputs metering area information indicating the metering area set by the metering area setting unit 28 to the camera controller 45. The camera controller 45 outputs the control signal to the DSP 50 of the selected in-vehicle camera (the right-side camera 11 or the left-side camera 12) to perform metering in accordance with the metering area information output by the calculation unit 25.
The DSP 50 performs metering using the metering area in accordance with the control signal supplied from the camera controller 45 and determines the exposure condition according to the metering result. The exposure information indicating the determined exposure condition is supplied to the AGC circuit 54 of the AFE 49.
In accordance with the exposure information received from the DSP 50, the AGC circuit 54 controls the input gain used in the analog-to-digital conversion of the raw data input from the CCD 47. The image is taken under the determined exposure condition, and thus optimum exposure is achieved for the particular area of interest to be used to produce the obstacle checking image.
The raw data is converted from analog form into digital form by the analog-to-digital converter 55 and is output to the DSP 50. The DSP 50 converts the raw data supplied from the analog-to-digital converter 55 into a YUV signal and outputs the resultant YUV signal to the video encoder 51. The video encoder 51 converts the YUV signal received from the DSP 50 into an NTSC signal and outputs the resultant NTSC signal, as the image data of the in-vehicle camera 11 or 12, to the ECU 14.
The image data supplied to the ECU 14 from the in-vehicle camera 11 or 12 is input to the camera image input unit 21 or 22 corresponding to the selected in-vehicle camera 11 or 12. The camera image input unit 21 or 22 converts the input image data from analog form into digital form and outputs the resultant digital image data to the camera image processing unit 24.
The camera image processing unit 24 extracts the particular area of interest from the image data output from the camera image input unit 21 or 22 and outputs the extracted image data of the particular area of interest to the calculation unit 25. The calculation unit 25 produces the obstacle checking image by enlarging the image of the particular area of interest output from the camera image processing unit 24. The produced obstacle checking image is displayed on the display 17. Because the exposure has been performed so as to achieve the optimum exposure condition for the particular area of interest, the displayed obstacle checking image has adequate brightness.
As described above, the in-vehicle camera automatic-exposure method/apparatus according to the present invention allows the metering area to be adequately set so as to achieve an optimum exposure condition for a particular area of interest in the imaging area of each of the in-vehicle cameras 9, 10, 11, and 12. Thus it is possible to obtain optimum brightness for an image produced from the image of the particular area of interest.
The present invention has been described above with reference to specific embodiments. However, the present invention is not limited to these embodiments, but various modifications are possible without departing from the spirit and the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.
Number | Date | Country | Kind |
---|---|---|---
2006-133747 | May 2006 | JP | national |