The present disclosure relates to an image-pickup apparatus, an image-pickup display method, and an image-pickup display program.
For a camera installed in a vehicle so as to capture scenes in, for example, the traveling direction, techniques for obtaining images having appropriate brightness by automatic exposure (AE) control are known. Japanese Unexamined Patent Application Publication No. 2010-041668 discloses a technique for performing exposure control using the luminance of each of a plurality of predetermined areas for exposure control. Japanese Unexamined Patent Application Publication No. 2014-143547 discloses a technique for changing the area used for the exposure operation in accordance with the traveling speed of the vehicle.
The object or area whose surroundings most need to be checked while the vehicle is traveling varies depending on the operation state of the vehicle. However, when the brightness or the color of the entire angle of view, or of a predetermined partial area of the image, is adjusted, the brightness or the color of the image may not be appropriate when the driver checks that object or area.
An image-pickup apparatus according to a first aspect of this embodiment includes: an image-pickup unit configured to capture an image of surroundings of a vehicle; a controller configured to control the image-pickup unit; an image processor configured to process image data output from the image-pickup unit; an output unit configured to output the image processed by the image processor to a display unit; and a detection unit configured to detect information regarding a course change of the vehicle, in which at least one of the image-pickup control carried out by the controller and the image processing carried out by the image processor applies weighting in such a way that the weighting becomes larger in a course change direction based on the information regarding the course change detected by the detection unit.
An image-pickup display method according to a second aspect of this embodiment includes: an image-pickup step for causing an image-pickup unit to capture an image, the image-pickup unit capturing an image of surroundings of a vehicle; a control step for controlling the image-pickup unit; an image processing step for processing image data captured in the image-pickup step; a display step for causing a display unit to display the image processed in the image processing step; and a detection step for detecting information regarding a course change of the vehicle, in which weighting is applied in such a way that the weighting becomes larger in a course change direction based on the information regarding the course change detected in the detection step in at least one of the control step and the image processing step.
An image-pickup display program according to a third aspect of this embodiment causes a computer to execute: an image-pickup step for causing an image-pickup unit to capture an image, the image-pickup unit capturing an image of surroundings of a vehicle; a control step for controlling the image-pickup unit; an image processing step for processing image data captured in the image-pickup step; a display step for causing a display unit to display the image processed in the image processing step; a detection step for detecting information regarding a course change of the vehicle; and processing of applying weighting, in at least one of the control step and the image processing step, in such a way that the weighting becomes larger in a course change direction based on the information regarding the course change detected in the detection step.
While the present disclosure will be explained with reference to an embodiment of the present disclosure, the disclosure according to the claims is not limited to the following embodiment. Further, not all the configurations described in this embodiment are necessary as the means for solving the problem.
The display unit 160 is a display apparatus that can replace a conventional rearview mirror. Like the conventional rearview mirror, the driver is able to check the rearward situation by observing the display unit 160 while driving. While an LCD panel is employed as the display unit 160 in this embodiment, various other kinds of display apparatuses, such as an organic EL display or a head-up display, may be employed. Further, the display unit 160 may be installed alongside the conventional rearview mirror, or may be an apparatus that uses a one-way mirror and is capable of switching between a display mode, in which the display shows through the one-way mirror, and a mirror mode, in which the reflection in the one-way mirror is used.
The own vehicle 10 includes a millimeter wave radar 11 that detects the presence of another vehicle on the rear side of the vehicle. When there is another vehicle, the millimeter wave radar 11 outputs a millimeter wave radar signal as a detection signal. The millimeter wave radar signal includes information indicating the direction of the other vehicle (right rear, directly behind, left rear) and its approach speed. The main body unit 130 acquires the signal from the millimeter wave radar 11 or the result of detecting the other vehicle by the millimeter wave radar 11.
The own vehicle 10 includes a steering wheel 12 that the driver uses for steering. The steering wheel 12 outputs a steering signal in a right direction when it is rotated to the right, and outputs a steering signal in a left direction when it is rotated to the left. The steering signal includes information indicating, in addition to the steering direction, a steering angle. The main body unit 130 acquires the steering signal via a Controller Area Network (CAN).
A blinker lever 13, which serves as a direction indicator, is provided on the side of the steering wheel 12. The blinker lever 13 outputs a blinker signal indicating the right direction when the driver presses the blinker lever 13 downwardly and indicating the left direction when the driver presses it upwardly. The main body unit 130 acquires the blinker signal or a signal indicating that the blinker has been operated via the CAN or the like.
A navigation system 14 is provided at the front left of the vehicle as viewed from the driver's seat. When the driver sets the destination, the navigation system 14 searches for a route, presents the route, and displays the current position of the own vehicle 10 on the map. The navigation system 14 outputs a navigation signal indicating the direction before it instructs a right or left turn. The main body unit 130 is connected to the navigation system 14 by wire or wirelessly in such a way that the main body unit 130 is able to acquire signals such as the navigation signal and data from the navigation system 14. Further, the image-pickup apparatus 100 may be one of the functions achieved by the system including the navigation system 14.
The camera unit 110 mainly includes a lens 112, an image-pickup device 114, and an analog front end (AFE) 116. The lens 112 guides a subject light flux that is incident thereon to the image-pickup device 114. The lens 112 may be composed of a plurality of optical lens groups.
The image-pickup device 114 is, for example, a CMOS image sensor. The image-pickup device 114 adjusts a charge accumulation time by an electronic shutter in accordance with the exposure time per frame specified by a system controller 131, conducts a photoelectric conversion, and outputs a pixel signal. The image-pickup device 114 passes the pixel signal to the AFE 116. The AFE 116 adjusts the level of the pixel signal in accordance with an amplification gain instructed by the system controller 131, A/D converts the pixel signal into digital data, and transmits the resulting signal to the main body unit 130 as pixel data. The camera unit 110 may be provided with a mechanical shutter and an iris diaphragm. When the mechanical shutter and the iris diaphragm are included, the system controller 131 is able to use them to adjust the amount of light made incident on the image-pickup device 114.
The main body unit 130 mainly includes the system controller 131, an image input IF 132, a working memory 133, a system memory 134, an image processor 135, a display output unit 136, an input/output IF 138, and a bus line 139. The image input IF 132 receives the pixel data from the camera unit 110 connected to the main body unit 130 via the cable and passes the data to the bus line 139.
The working memory 133 is composed of, for example, a volatile high-speed memory. The working memory 133 receives the pixel data from the AFE 116 via the image input IF 132, compiles the received pixel data into image data of one frame, and then stores the compiled image data. The working memory 133 passes the image data to the image processor 135 in a unit of frames. Further, the working memory 133 is used as appropriate as a temporary storage area even in the middle of image processing performed by the image processor 135.
The image processor 135 performs various kinds of image processing on the received image data, thereby generating image data in accordance with a predetermined format. When, for example, moving image data in a form of an MPEG file is generated, each frame image data is subjected to white balance processing, gamma processing and the like, and then the image data is subjected to intraframe and interframe compression processing. The image processor 135 sequentially generates the image data to be displayed from the image data that has been generated and passes the generated data to the display output unit 136.
The display output unit 136 converts the image data to be displayed received from the image processor 135 into an image signal that can be displayed on the display unit 160 and outputs the image signal. That is, the display output unit 136 functions as an output unit that outputs the image captured by the camera unit 110, which is the image-pickup unit, to the display unit 160. When the main body unit 130 and the display unit 160 are connected to each other by an analog cable, the display output unit 136 D/A converts the image data to be displayed and outputs the image data after the conversion. When, for example, the main body unit 130 and the display unit 160 are connected to each other by an HDMI (registered trademark) cable, the display output unit 136 converts the image data to be displayed into a digital signal in an HDMI format and outputs the data after the conversion. Alternatively, the data may be transmitted without compressing images, using a transmission system such as Ethernet or a format such as LVDS. The display unit 160 sequentially displays the image signals received from the display output unit 136.
A recognition processor 137 analyzes the received image data and recognizes, for example, a person, another vehicle, and a separatrix. The recognition processing is the existing processing such as, for example, edge detection processing and comparison with various recognition dictionaries.
The system memory 134 is composed of, for example, a non-volatile storage medium such as an EEPROM (registered trademark). The system memory 134 stores and holds constants, variables, set values, programs, and the like required for the operation of the image-pickup apparatus 100.
The input/output IF 138 is a connection interface with an external device. For example, the input/output IF 138 receives a signal from the external device and passes the received signal to the system controller 131, and receives a control signal such as a signal request for the external device from the system controller 131 and transmits the received signal to the external device. The blinker signal, the steering signal, the signal from the millimeter wave radar 11, and the signal from the navigation system 14 described above are input to the system controller 131 via the input/output IF 138. That is, the input/output IF 138 functions as a detection unit that detects that the own vehicle 10 will change course by acquiring information regarding the course change of the own vehicle 10 in collaboration with the system controller 131.
The system controller 131 directly or indirectly controls each of the components that compose the image-pickup apparatus 100. The control by the system controller 131 is achieved by a program or the like loaded from the system memory 134.
Next, an image-pickup control according to this embodiment will be explained.
A display angle of view 261 expressed as a range of an inner frame indicates an image area displayed on the display unit 160. When the display unit 160 can be replaced by the conventional rearview mirror as stated above, a display panel having a horizontally long aspect ratio like the conventional rearview mirror is employed. The display unit 160 displays the area that corresponds to the display angle of view 261 of the image generated from the output of the image-pickup device 114. In this embodiment, the image processor 135 cuts the display angle of view 261 out of the image generated by the image-pickup angle 214 to generate the image data to be displayed. The image displayed on the display unit 160 is in a mirror image relationship to the image captured by the camera unit 110 directed toward the rear side of the own vehicle 10. Therefore, the image processor 135 performs image processing of inverting the mirror image. In the following description, some scenes will be explained based on the processed mirror image to be displayed on the display unit 160 in order to facilitate understanding.
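The mirror-image inversion performed by the image processor 135 can be illustrated by the following minimal sketch; treating the image as a simple two-dimensional list of pixel values is an assumption made only for illustration.

```python
def mirror_invert(image):
    """Horizontally flip each row so that the displayed image has the same
    left-right sense as a conventional rearview mirror."""
    return [row[::-1] for row in image]
```
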
One exemplary scene shown in
In the normal state in which the own vehicle 10 goes straight along the center lane 900, on the premise that the driver observes the overall rear environment, the system controller 131 controls the camera unit 110 in such a way that the overall image to be acquired has a balanced brightness. Specifically, the system controller 131 generates one piece of image data by executing image-pickup processing by a predetermined image-pickup control value, and executes an AE operation using this image data.
The AE operation is, for example, an operation of calculating an average luminance value of the overall image from the luminance value of each area of the generated image and determining the image-pickup control value such that the difference between the average luminance value and the target luminance value becomes 0. More specifically, the AE operation converts the difference between the calculated average luminance value and the target luminance value into a correction amount of the image-pickup control value by referring to, for example, a lookup table stored in the system memory 134, adds the correction amount to the previously used image-pickup control value, and determines the obtained value as the image-pickup control value for the next image-pickup processing. The image-pickup control value includes at least one of the charge accumulation time of the image-pickup device 114 (corresponding to the shutter speed) and the amplification gain of the AFE 116. When the iris diaphragm is included, the image-pickup control value may also include the F value of the optical system, which can be adjusted by driving the iris diaphragm.
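The AE operation described above can be sketched as follows. The target luminance, the lookup-table contents, and the function names are illustrative assumptions, not values taken from the actual apparatus.

```python
TARGET_LUMINANCE = 118  # assumed target average luminance on an 8-bit scale

# Hypothetical lookup table: (lower bound, upper bound) of the luminance
# difference mapped to a correction amount of the image-pickup control value.
CORRECTION_LUT = [
    (-256, -64, +2.0),   # image far too dark -> lengthen accumulation time
    (-64, -16, +1.0),
    (-16, 16, 0.0),      # close enough to the target -> no correction
    (16, 64, -1.0),
    (64, 256, -2.0),     # image far too bright -> shorten accumulation time
]

def ae_correction(average_luminance, target=TARGET_LUMINANCE):
    """Convert the difference between the average luminance and the target
    luminance into a correction amount by referring to the lookup table."""
    diff = average_luminance - target
    for lo, hi, correction in CORRECTION_LUT:
        if lo <= diff < hi:
            return correction
    return 0.0

def next_control_value(previous_control_value, average_luminance):
    # The correction amount is added to the previously used control value
    # to obtain the value for the next image-pickup processing.
    return previous_control_value + ae_correction(average_luminance)
```
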
When the average luminance value of the overall image is calculated, the luminance value of each area is multiplied by a weighting coefficient.
In this embodiment, as shown by the dotted lines in
The weighting coefficient in the normal state is applied not only in the case in which the own vehicle 10 travels along the center lane 900 but also in a case in which the own vehicle 10 travels along an arbitrary lane without changing lanes. Further, the weighting coefficient in the normal state is not limited to the example in which the weighting coefficients are all 1 like the aforementioned example. As another example of the weighting coefficient in the normal state, the weighting may be set in such a way that the weighting in the central part of the image-pickup angle 214 or the display angle of view 261 becomes larger. The central part here may mean the central part in the vertical direction and the lateral direction, or the central part in any one of the vertical direction and the lateral direction of the image-pickup angle 214 or the display angle of view 261.
Further, as another example of the weighting coefficient in the normal state, the weighting coefficient in the lower part of the image-pickup angle 214 or the display angle of view 261 may be set to be larger. The “lower part” here means, for example, the part lower than the central part in the vertical direction of the image-pickup angle 214 or the display angle of view 261 or the part lower than the boundary 922 between the sky 920 and the road. In the following description, the weighting coefficient in the normal state includes the above.
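The weighted average luminance computed from per-area luminance values and weighting coefficients can be sketched as follows; the 3x4 grid of divided areas and the concrete coefficient values are illustrative assumptions.

```python
def weighted_average_luminance(luminances, weights):
    """Weighted mean of per-area luminance values; `luminances` and
    `weights` are same-shape 2-D lists over the divided areas."""
    num = 0.0
    den = 0.0
    for lum_row, w_row in zip(luminances, weights):
        for lum, w in zip(lum_row, w_row):
            num += lum * w
            den += w
    return num / den

# Normal-state examples on a hypothetical 3x4 grid of divided areas:
uniform = [[1, 1, 1, 1]] * 3       # all weighting coefficients equal to 1
lower_weighted = [[1, 1, 1, 1],    # larger weight given to the part below
                  [1, 1, 1, 1],    # the boundary between the sky and road
                  [2, 2, 2, 2]]
```
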
When the system controller 131 has detected the course change to the right direction via the input/output IF 138, the system controller 131 executes the setting of the window for the image that has been acquired up to the current time. The system controller 131 causes the recognition processor 137 to execute image processing such as edge enhancement or object recognition processing to extract the separatrixes 911, 912, and 913, and the boundary 922. Then the extracted lines are subjected to interpolation processing or the like, thereby determining the area of the right lane 901 to which the course will be changed, which is defined to be a weighting window 301. Further, the area of the left lane 902, which is the opposite of the right lane 901 with respect to the center lane 900, and the area on the left side of the left lane 902 are determined, and these areas are collectively defined to be a reduction window 303. The area other than the weighting window 301 and the reduction window 303 is defined to be a normal window 302.
After the system controller 131 defines the weighting window 301, the normal window 302, and the reduction window 303, a weighting coefficient that is larger than that applied in the normal state is given to the divided areas included in the weighting window 301, a weighting coefficient that is the same as that applied in the normal state is given to the areas included in the normal window 302, and a weighting coefficient that is smaller than that applied in the normal state is given to the reduction window 303. In the example shown in
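The assignment of weighting coefficients from the three windows can be sketched as follows; the concrete coefficient values are illustrative assumptions, chosen only so that the weighting window receives a larger, the normal window an equal, and the reduction window a smaller coefficient than the normal state.

```python
# Assumed coefficients relative to a normal-state coefficient of 1.0.
WINDOW_COEFFICIENTS = {
    "weighting": 2.0,   # larger than that applied in the normal state
    "normal": 1.0,      # same as that applied in the normal state
    "reduction": 0.5,   # smaller than that applied in the normal state
}

def coefficients_for(window_map):
    """window_map: 2-D list naming the window each divided area belongs to;
    returns the 2-D list of weighting coefficients for the AE operation."""
    return [[WINDOW_COEFFICIENTS[name] for name in row] for row in window_map]
```
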
When the weighting is applied as stated above, the influence of the area of the right lane 901 in which the weighting window 301 is set becomes relatively large, and the influence of the area on the left side including the left lane 902 in which the reduction window 303 is set becomes relatively small (in the example shown in
When the image-pickup control value is determined by the result of the AE operation in which weighting is applied as stated above, the brightness of the subject included in the area of the right lane 901 in the image captured with this image-pickup control value is expected to become appropriate. That is, while the subject included in the shade 925 is dark and hard to recognize visually in the image in the normal state, it is possible to determine in which direction the driver wants to change lanes from the various signals input to the input/output IF 138 and to optimize the brightness of the subject included in the area of the lane in the lane change direction. That is, when the driver changes course, the camera unit 110 is controlled in such a way that the brightness of the partial area in that direction becomes appropriate, whereby it is possible to present an image that enables the driver to appropriately check the right lane 901, which is the lane after the course change.
While the information regarding the course change of the own vehicle 10 has been detected using the blinker signal, the steering signal, the signal from the millimeter wave radar 11, and the signal from the navigation system 14 in the aforementioned example, any one of these signals may be used or some of these signals may be combined with one another. Further, other signals related to the course change may instead be used. Furthermore, the system controller 131 may detect the information regarding the course change using means other than the input/output IF 138. When, for example, the change in the separatrix is detected from frame images continuously captured by the camera unit 110, the motion of the own vehicle 10 in the right or left direction can be detected. The result of this detection can be used as the information regarding the course change.
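Combining the available signals into a single course-change decision can be sketched as follows; the argument names and the priority order (blinker first, as the most direct expression of the driver's intent) are illustrative assumptions, and any one signal or combination of signals may be used as described above.

```python
def detect_course_change(blinker=None, steering=None, navigation=None):
    """Return 'right', 'left', or None from whichever course-change
    signals are available; unavailable signals are passed as None."""
    # The blinker is checked first in this sketch because it most directly
    # expresses the driver's intent to change course.
    for signal in (blinker, navigation, steering):
        if signal in ("right", "left"):
            return signal
    return None
```
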
In the following description, some variations of the setting of the window will be explained.
As shown in
As shown in
As described above, by dynamically updating the weighting window 301, the driver is able to continuously observe the subject in the lane change direction at an appropriate brightness even during the lane change. While the area of the weighting window 301 set on the road surface is relatively fixed with respect to the own vehicle 10 in the aforementioned example, as long as the lane after the change is recognized by the separatrix, the lane area may be defined to be a fixed area of the weighting window 301. In this case, the lane area may be extracted for each frame, since the lane area moves within the angle of view while the lane change is being performed.
Further, the system controller 131 may determine the end of the lane change from a change in the signal input to the input/output IF 138. For example, when the blinker signal is input, the timing at which the reception of the blinker signal stops can be determined to be the end of the lane change. When the millimeter wave radar signal is input, the timing at which the distance from the own vehicle 10 to the other vehicle 20 reaches a predetermined value can be determined to be the end of the lane change. Further, when the change in the separatrix is detected from the frame images continuously captured by the camera unit 110, the system controller 131 may determine the timing at which the movement of the separatrix in the right or left direction ends to be the end of the lane change.
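The end-of-lane-change determination described above can be sketched as follows; the argument names and the comparison against the radar-distance threshold are illustrative assumptions.

```python
def lane_change_ended(prev_blinker_on, blinker_on,
                      radar_distance=None, distance_threshold=None):
    """Judge the end of the lane change either from the blinker signal
    ceasing to be received or from the radar distance to the other
    vehicle reaching a predetermined value."""
    # Transition from receiving the blinker signal to not receiving it.
    if prev_blinker_on and not blinker_on:
        return True
    # Distance to the other vehicle reaching the assumed threshold.
    if radar_distance is not None and distance_threshold is not None:
        return radar_distance >= distance_threshold
    return False
```
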
While some variations of the window settings have been explained with reference to
Next, one example of the control flow of the image-pickup apparatus 100 will be explained.
In Step S101, the system controller 131 sends the image-pickup control signal including the image-pickup control value to the camera unit 110, causes the camera unit 110 to capture images, and causes the camera unit 110 to transmit the pixel data to the main body unit 130. Then the process goes to Step S102, where the system controller 131 determines whether information indicating that the own vehicle 10 will start the course change has been acquired via the input/output IF 138 or the like.
When it is determined that the information indicating that the course change will start has not been acquired, the process goes to Step S121, where the system controller 131 causes the image processor 135 to process the pixel data acquired in Step S101 to form the display image, and performs the AE operation with weighting processing in which the weighting coefficient in the normal state is applied, thereby determining the image-pickup control value. Then the process goes to Step S122, where the system controller 131 sends image-pickup control information that includes the image-pickup control value determined based on the weighting coefficient in the normal state to the camera unit 110, causes the camera unit 110 to capture images, and causes the camera unit 110 to transmit the image data to the main body unit 130. When the main body unit 130 acquires the image data, the system controller 131 goes to Step S123, where the system controller 131 causes the image processor 135 to generate the display image and causes the display unit 160 to display the generated image via the display output unit 136. When it is determined in Step S102 that the information indicating that the course change will start has not been acquired, in place of the aforementioned AE operation with weighting processing in which the weighting coefficient in the normal state is applied, the AE operation without weighting described with reference to
When it is determined in Step S102 that the information indicating that the course change will start has been acquired, the process goes to Step S105, where the system controller 131 causes the image processor 135 to process the pixel data acquired in Step S101 and sets a window such as the weighting window. In this case, the weighting window is set in the area in which the course is changed, as described above.
Then the process goes to Step S106, where the system controller 131 determines whether there is a moving body such as another vehicle. The system controller 131 may determine the presence of the moving body using the millimeter wave radar signal, or may determine it from a motion vector of the subject when images of a plurality of frames have already been acquired. When the millimeter wave radar signal is used, the system controller 131 functions as a detection unit that detects the moving body moving in the vicinity of the vehicle in collaboration with the input/output IF 138. Similarly, when the motion vector is used, the system controller 131 functions as a detection unit in collaboration with the image processor 135. When the system controller 131 determines that a moving body is present, the system controller 131 extracts the area of the moving body from the image and performs a correction to add this area to the weighting window 301 (Step S107).
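The correction of Step S107, which adds the area of the moving body to the weighting window 301, can be sketched as follows; representing the window as a grid of named areas is an assumption made for illustration.

```python
def add_moving_body_to_window(window_map, body_areas):
    """Return a corrected window map in which the divided areas covered
    by the moving body (given as (row, column) indices) are included in
    the weighting window."""
    corrected = [row[:] for row in window_map]  # leave the input unchanged
    for (r, c) in body_areas:
        corrected[r][c] = "weighting"
    return corrected
```
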
When the weighting window is corrected in Step S107 or when it is determined in Step S106 that there is no moving body, the process goes to Step S108, where the system controller 131 performs the AE operation with weighting, thereby determining the image-pickup control value. Then the process goes to Step S109, where the system controller 131 sends the image-pickup control signal including the image-pickup control value to the camera unit 110, causes the camera unit 110 to capture images, and causes the camera unit 110 to transmit the pixel data to the main body unit 130. When the main body unit 130 acquires the pixel data, the process goes to Step S110, where the system controller 131 causes the image processor 135 to process the acquired data to form the display image, and causes the display unit 160 to display the display image via the display output unit 136.
Then the process goes to Step S111, where the system controller 131 determines whether it has acquired the information indicating that the own vehicle 10 will end the course change via the input/output IF 138 or the like. When it is determined that this information has not been acquired, the process goes back to Step S105, where the processing at the time of the lane change is continued. The system controller 131 repeats Steps S105 to S111, thereby updating the display image substantially in real time in accordance with a predetermined frame rate.
When it is determined in Step S111 that the information indicating that the own vehicle 10 will end the course change has been acquired, the process goes to Step S112, where the system controller 131 releases the window that has been set. Then the process goes to Step S113, where it is determined whether the display end instruction has been accepted. The display end instruction is, for example, another operation of the power switch. When it is determined that the display end instruction has not been accepted, the process goes back to Step S101. When it is determined that the display end instruction has been accepted, the series of processing is ended.
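The control flow of Steps S101 to S113 described above can be sketched as the following loop; all of the callables passed in are assumed hooks standing in for the apparatus components, not actual interfaces of the image-pickup apparatus 100.

```python
def display_loop(capture, course_change_started, course_change_ended,
                 display_end_requested, ae_normal, ae_weighted,
                 set_window, release_window, show):
    """Hedged sketch of the control flow of Steps S101-S113."""
    control_value = 0.0
    while True:
        frame = capture(control_value)                      # S101
        if course_change_started():                         # S102
            while True:
                window = set_window(frame)                  # S105-S107
                control_value = ae_weighted(frame, window)  # S108
                frame = capture(control_value)              # S109
                show(frame)                                 # S110
                if course_change_ended():                   # S111
                    release_window()                        # S112
                    break
        else:
            control_value = ae_normal(frame)                # S121
            frame = capture(control_value)                  # S122
            show(frame)                                     # S123
        if display_end_requested():                         # S113
            return
```
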
In the aforementioned processing, when it is determined that a moving body is present (YES in Step S106), a correction to add the area of the moving body to the weighting window 301 is executed (Step S107). This is an example of the window setting in consideration of the moving body described with reference to
When the system controller 131 has acquired, in Step S102, the information indicating that the course change will start, the process goes to Step S205, where it is determined whether there is a moving body such as another vehicle. When it is determined that there is no moving body, the process goes to Step S208, where the system controller 131 executes the AE operation with weighting processing in which the weighting coefficient in the normal state is applied and determines the image-pickup control value, similar to the processing from Steps S121 to S123. On the other hand, when it is determined that a moving body is present, the process goes to Step S206, where the system controller 131 extracts the area of the moving body from the image and sets the weighting window in the area in the direction in which the course is changed in such a way as to include this area. Then the process goes to Step S207, where the system controller 131 performs the AE operation with weighting, thereby determining the image-pickup control value.
The system controller 131 sends the image-pickup control signal that includes the image-pickup control value determined in Step S207 or the image-pickup control value determined in Step S208 to the camera unit, causes the camera unit to capture images, and causes the camera unit to transmit the pixel data to the main body unit 130 (Step S209). When the main body unit 130 acquires the pixel data, the system controller 131 goes to Step S110.
According to the aforementioned control flow, when there is a moving body that needs to be particularly paid attention to at the time of the course change, the driver is able to visually recognize this moving body at an appropriate brightness. When there is no moving body that needs to be paid attention to, the driver is able to visually recognize the rear environment while prioritizing the overall brightness balance.
While the image processor 135 performs the AE operation with weighting on the overall image generated by the image-pickup angle 214 in the embodiment described above, the system controller 131 may first cut the display angle of view 261 out of the overall image and perform the operation on the image of the display angle of view 261. By performing the AE operation with weighting on the image of the display angle of view 261, even in a case in which there are subjects whose luminance levels are extremely high or low in the area of the image-pickup angle that has been removed, a more appropriate image-pickup control value can be determined without being affected by these subjects.
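Cutting the display angle of view 261 out of the overall image before the AE operation can be sketched as follows; the image is assumed, for illustration, to be a two-dimensional list of luminance values, so that areas outside the display angle of view do not affect the subsequent operation.

```python
def crop_display_angle(image, top, left, height, width):
    """Cut the rectangle corresponding to the display angle of view out of
    the overall image generated from the image-pickup angle."""
    return [row[left:left + width] for row in image[top:top + height]]
```
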
In the embodiment described above, the example in which the AE operation with weighting is performed in such a way that the weighting to be applied becomes larger in the course change direction in the image captured by the camera unit 110 that functions as the image-pickup unit based on the information regarding the course change detected by the input/output IF 138 that serves as the detection unit, and the camera unit 110 is controlled based on the result of the AE operation has been explained. However, the improvement in the visibility of the image can be achieved not only by the image-pickup control by the AE operation but also by image processing by the image processor 135.
As an example of improving the visibility by the image processing, first, an example in which the weighting is applied in such a way that the weighting becomes larger in the course change direction in the image captured by the camera unit 110 based on the information regarding the course change detected by the input/output IF 138 and the image processing of the brightness adjustment is performed will be explained.
After the system controller 131 causes the camera unit 110 to capture images and transmit the pixel data to the main body unit 130 in Step S101, the system controller 131 determines in Step S202 whether it has acquired, via the input/output IF 138 or the like, information indicating that the own vehicle 10 has started the course change or information indicating that the own vehicle 10 is continuing the course change.
When the system controller 131 has determined that neither of these information items has been acquired, the process goes to Step S203, where the image processor 135 executes normal brightness adjustment on the pixel data acquired in Step S101. The normal brightness adjustment is brightness adjustment in which the weighting coefficient in the normal state is applied. Alternatively, in place of the brightness adjustment in which the weighting coefficient in the normal state is applied, as described above with reference to
When it is determined in Step S202 that the information indicating that the own vehicle 10 has started the course change or the information indicating that the own vehicle 10 is continuing the course change has been acquired, the system controller 131 sets a window such as the weighting window in Step S105. Further, the weighting window 301 is corrected in accordance with the conditions (Steps S106 and S107). When no moving body is taken into consideration, the processing of Steps S106 and S107 may be omitted.
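The window setup and its moving-body correction can be sketched as follows. The function names, the linear ramp, and the bounding-box boost are illustrative assumptions rather than the actual form of the weighting window 301.

```python
import numpy as np

def build_weight_window(h, w, direction, base=1.0, peak=3.0):
    """Weight map that ramps from `base` up to `peak` toward the
    course-change side ('left' or 'right') of the image.
    The linear ramp and the peak value are assumptions of this sketch."""
    ramp = np.linspace(base, peak, w)
    if direction == 'left':
        ramp = ramp[::-1]
    return np.tile(ramp, (h, 1))

def boost_moving_body(window, bbox, factor=2.0):
    """Optional correction (cf. Steps S106 and S107): further emphasize a
    detected moving body's bounding box (top, left, height, width)."""
    t, l, bh, bw = bbox
    out = window.copy()
    out[t:t + bh, l:l + bw] *= factor
    return out
```

When no moving body is considered, `boost_moving_body` is simply skipped, mirroring the omission of Steps S106 and S107.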
When the process goes to Step S208, the image processor 135 executes the brightness adjustment with weighting on the pixel data acquired in Step S101. Specifically, as described with reference to
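A minimal sketch of such brightness adjustment driven by a weighted statistic is shown below; the single-channel image, the target level, and the single global gain are assumptions of the sketch, not the apparatus's actual processing.

```python
import numpy as np

def weighted_brightness_adjust(img, weights, target=118.0):
    """Choose a gain from a *weighted* mean, so the course-change side
    (where the weights are larger) dominates the brightness decision;
    the gain is then applied to the whole image.

    img     : H x W luminance array
    weights : H x W weighting window
    """
    img = img.astype(np.float64)
    mean = (img * weights).sum() / weights.sum()
    gain = target / max(mean, 1e-6)
    return np.clip(img * gain, 0, 255).astype(np.uint8)
```

With a normal (uniform) weighting window this reduces to the normal brightness adjustment of Step S203, so the same routine can serve both branches of the flow.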
Next, the process goes to Step S211, where the system controller 131 determines whether it has acquired, via the input/output IF 138 or the like, information indicating that the own vehicle 10 will end the course change. When it is determined that this information has not been acquired, the process goes back to Step S101. When it is determined that this information has been acquired, the process goes to Step S112.
As described above, even when the brightness is adjusted by the image processing, the driver is able to appropriately check, during the course change, the state of the lane after the course change.
As a further example of improving the visibility by the image processing, an example will next be explained in which the weighting is applied in such a way that the weighting becomes larger in the course change direction in the image captured by the camera unit 110, based on the information regarding the course change detected by the input/output IF 138, and image processing for white balance adjustment is performed.
When the system controller 131 has determined in Step S202 that neither of these information items has been acquired, the process goes to Step S303, where the image processor 135 executes normal white balance adjustment on the pixel data acquired in Step S101. The normal white balance adjustment is white balance adjustment with weighting processing in which the weighting coefficient in the normal state is applied. Alternatively, in place of the white balance adjustment with weighting processing in which the weighting coefficient in the normal state is applied, as described with reference to
When it is determined in Step S202 that the information indicating that the own vehicle 10 has started the course change or the information indicating that the own vehicle 10 is continuing the course change has been acquired, the system controller 131 sets a window such as the weighting window in Step S105. Further, the weighting window 301 is corrected in accordance with the conditions (Steps S106 and S107). When no moving body is taken into consideration, the processing of Steps S106 and S107 may be omitted.
When the process goes to Step S208, the image processor 135 executes the white balance adjustment with weighting on the pixel data acquired in Step S101. Specifically, as described above with reference to
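One way to realize white balance adjustment with weighting is a spatially weighted gray-world estimate, sketched below. The RGB representation and the gray-world gains are illustrative choices for this sketch, not the apparatus's actual method.

```python
import numpy as np

def weighted_gray_world_wb(img, weights):
    """Gray-world white balance in which each pixel's contribution to the
    per-channel averages is scaled by the weighting window, so the colors
    on the course-change side dominate the illuminant estimate.

    img     : H x W x 3 RGB array
    weights : H x W weighting window
    """
    img = img.astype(np.float64)
    w = weights[..., None]
    means = (img * w).sum(axis=(0, 1)) / w.sum()      # weighted per-channel mean
    gains = means.mean() / np.maximum(means, 1e-6)    # scale channels to the gray mean
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```

Larger weights on the course-change side cause the channel gains to be chosen so that objects in that area, rather than in the rest of the angle of view, are rendered with neutral color.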
As described above, by adjusting the white balance, the driver is able to visually recognize, during the course change, the correct color of the object after the course change.
The brightness adjustment and the white balance adjustment by the image processor 135 described above with reference to
Furthermore, the image-pickup control based on the result of the AE operation with the weighting described with reference to
The image-pickup apparatus 100 according to this embodiment has been described as an apparatus that includes the camera unit 110 directed toward the rear side of the own vehicle 10 and supplies a rear image to the display unit 160, which can serve as a replacement for the rearview mirror. However, the present disclosure may also be applied to an image-pickup apparatus that includes the camera unit 110 directed toward the front side of the own vehicle 10. For example, with a camera unit that captures the area in front of a large vehicle, which is a blind area from the driver's seat of the large vehicle, the convenience for the driver is improved when the subject in the course change direction, including a course change such as a right turn or a left turn, is displayed at an appropriate brightness.
While the images described above have been described as images successively displayed on the display unit 160 after processing of the images periodically captured by the camera unit 110, the images may be, for example, still images or moving images captured for recording at a predetermined timing or in accordance with the timing of an event that has occurred.
The operations, procedures, steps, and stages of each process performed by an apparatus, system, program, and method shown in the embodiment described above can be performed in any order as long as the order is not indicated by “prior to”, “before”, or the like and as long as the output from a previous process is not used in a later process. Even if the process flow is described using phrases such as “first” or “next” for the sake of convenience, it does not necessarily mean that the process must be performed in this order.
As described above, the image-pickup apparatus, the image-pickup display method, and the image-pickup display program described in this embodiment can be used as, for example, an image-pickup apparatus mounted on an automobile, an image-pickup display method executed in the automobile, and an image-pickup display program executed by a computer of the automobile.
Number | Date | Country | Kind
---|---|---|---
2016-103392 | May 2016 | JP | national
2017-015157 | Jan 2017 | JP | national
The present application is a Continuation of International Application No. PCT/JP2017/009362, filed on Mar. 9, 2017, which is based upon and claims the benefit of priority from Japanese Patent Application No. 2016-103392, filed on May 24, 2016, Japanese Patent Application No. 2017-015157, filed on Jan. 31, 2017, the entire contents of which are hereby incorporated by reference.
 | Number | Date | Country
---|---|---|---
Parent | PCT/JP2017/009362 | Mar 2017 | US
Child | 16184837 | | US