The present disclosure relates to an imaging system that captures an image of a predetermined region, and generates image data used for image analysis. Furthermore, the present disclosure relates to a moving body control system including the imaging system.
Recently, in the automotive field, vehicles have come into wide use that capture an image ahead of the vehicle by using a camera and, based on the captured image, recognize the traffic lane in which the vehicle is driving, a vehicle ahead, a person, an obstacle, or other objects, to control actions (a speed or braking) of the vehicle. Accordingly, various in-vehicle cameras mounted on vehicles have been developed (refer to, for example, Unexamined Japanese Patent Publication No. 2017-046051 and Unexamined Japanese Patent Publication No. 2017-017480).
To accurately recognize other vehicles, persons, obstacles, or other objects ahead of a vehicle based on a captured image, a captured image with high definition is required.
The present disclosure provides an imaging system that provides image data with high definition suitable for image recognition. Furthermore, the present disclosure provides a moving body control system provided with such an imaging system.
A first aspect of the present disclosure provides the imaging system. The imaging system includes a first imaging device including a first optical system having a first view angle and a first imaging element that captures a first subject image formed through the first optical system to generate first image data, and a second imaging device including a second optical system having a second view angle that is wider than the first view angle and a second imaging element that captures a second subject image formed through the second optical system to generate second image data. When a number of pixels that capture a subject image involved in a unit view angle is defined as a resolution, the second optical system is configured to form an image including a first region and a second region, which do not overlap each other, such that a resolution in the second region is higher than a resolution in the first region corresponding to an imaging range with the first view angle, in an imaging surface of the second subject image.
A second aspect according to the present disclosure provides a moving body control system that controls action of a moving body based on a captured image. The moving body control system includes the imaging system according to the first aspect and a control device that controls the action of the moving body based on information analyzed by the imaging system.
According to the imaging system in the present disclosure, image data with high definition used for image analysis can be generated. According to the moving body control system of the present disclosure, the action of the moving body is controlled based on an analysis result of the image data with high definition, whereby accurate control according to a surrounding condition can be achieved.
Hereinafter, exemplary embodiments will be described in detail with reference to the drawings as appropriate. However, descriptions in more detail than necessary may be omitted. For example, a detailed description of well-known matters and a duplicate description of substantially identical configurations may be omitted. Such omissions are made in order to avoid unnecessary redundancy of the following description and to facilitate understanding of those skilled in the art.
Here, the inventors of the present disclosure provide the accompanying drawings and the following description such that those skilled in the art can sufficiently understand the present disclosure, and therefore, they do not intend to restrict the subject matters of claims by the accompanying drawings and the following description.
Imaging system 100 includes first imaging device 10a, second imaging device 10b, and third imaging device 10c, which respectively capture images of a scene ahead of the vehicle and generate image data, and image analysis device 20 that analyzes the image data generated by first imaging device 10a to third imaging device 10c. First imaging device 10a to third imaging device 10c are disposed at a front part of the vehicle. The front part of the vehicle is a front bumper, for example. First imaging device 10a to third imaging device 10c are disposed such that respective optical axes of those imaging devices substantially coincide with each other in a horizontal direction.
First imaging device 10a to third imaging device 10c respectively have view angles W1 to W3 that are different from each other, as illustrated in
Image analysis device 20 receives the captured images from three imaging devices 10a to 10c. Image analysis device 20 then analyzes the received captured images and detects at least one of a vehicle, a person, a bicycle, a traffic lane, a traffic sign, and an obstacle ahead of the vehicle. Hereafter, those objects are referred to as “detection targets”.
First interface 23a to third interface 23c receive pieces of image data from first imaging device 10a to third imaging device 10c, respectively. Image processing circuit 21 performs, on each piece of received image data, analysis processing for detecting a predetermined object. Fourth interface 25 transmits the analysis result to control device 30. Data storage 29 stores a program to be executed by image processing circuit 21 and the received image data, for example. Image processing circuit 21 includes a central processing unit (CPU). Image processing circuit 21 executes the program stored in data storage 29 to achieve a function described below. Image processing circuit 21 may include a dedicated hardware circuit designed so as to achieve the function described below. In other words, image processing circuit 21 may include the CPU, a micro processing unit (MPU), a field-programmable gate array (FPGA), a digital signal processor (DSP), or an application specific integrated circuit (ASIC), for example. Data storage 29 is configured with a hard disk drive (HDD), a solid state drive (SSD), a nonvolatile memory, or random access memory (RAM), for example.
First imaging device 10a includes optical system 122a, imaging element 121a, signal processing circuit 131a, and interface 133a. Imaging element 121a captures a subject image generated by receiving light through optical system 122a and generates an image signal. Signal processing circuit 131a performs predetermined image processing (for example, gamma correction and distortion correction) on the image signal. Interface 133a is a circuit for outputting the image signal that is signal-processed by signal processing circuit 131a to an external apparatus.
Second imaging device 10b includes optical system 122b, imaging element 121b, signal processing circuit 131b, and interface 133b. Imaging element 121b captures a subject image generated by receiving light through optical system 122b and generates an image signal. Signal processing circuit 131b performs predetermined image processing (for example, gamma correction and distortion correction) on the image signal. Interface 133b is a circuit for outputting the image signal that is signal-processed by signal processing circuit 131b to the external apparatus.
Third imaging device 10c includes optical system 122c, imaging element 121c, signal processing circuit 131c, and interface 133c. Imaging element 121c captures a subject image generated by receiving light through optical system 122c and generates an image signal. Signal processing circuit 131c performs predetermined image processing (for example, gamma correction and distortion correction) on the image signal. Interface 133c is a circuit for outputting the image signal that is signal-processed by signal processing circuit 131c to the external apparatus.
Imaging elements 121a to 121c are charge coupled device (CCD) image sensors or complementary metal oxide semiconductor (CMOS) image sensors, for example. Each generated image has an aspect ratio of 16:9 and a number of pixels of 1920×1080, for example.
Optical system 122a in first imaging device 10a is designed so as to form a subject image of a region with the view angle being 35 degrees (that is, region A1). Optical system 122b in second imaging device 10b is designed so as to form a subject image of a region with the view angle being 50 degrees (that is, region A2a, region A1, and region A2b). Optical system 122c in third imaging device 10c is designed so as to form a subject image of a region with the view angle being 120 degrees (that is, region A3a, region A2a, region A1, region A2b, and region A3b).
Optical system 122a of first imaging device 10a is designed so as to obtain a uniform “resolution” over an entire region of image 50a (first image) to be formed. The “resolution” herein corresponds to a number of pixels in imaging elements 121a to 121c used to capture images with a unit view angle formed on imaging elements 121a to 121c through optical systems 122a to 122c (a detailed description will be made later). In contrast, each of optical system 122b in second imaging device 10b and optical system 122c in third imaging device 10c is designed such that, in image 50b (second image) and image 50c (third image) to be formed, a resolution (or magnification ratio) of a region overlapping a range of a view angle of another optical system (hereafter, referred to as a “view angle overlapping region”) is lower than a resolution (or magnification ratio) of a region different from the view angle overlapping region. In this exemplary embodiment, the region different from the view angle overlapping region is, for example, a region other than the view angle overlapping region in each imaging surface.
For example, a range with the view angle being 35 degrees in optical system 122b in second imaging device 10b overlaps the region of optical system 122a in first imaging device 10a (refer to
Similarly, a range with the view angle being 50 degrees in optical system 122c in third imaging device 10c overlaps the region of optical system 122b in second imaging device 10b (refer to
In this manner, optical system 122b forms one image including regions of different resolutions. Further, optical system 122c also forms one image including regions of different resolutions. Configurations of such optical system 122b in second imaging device 10b and optical system 122c in third imaging device 10c will be described below.
Optical systems 122a to 122c are devices for forming images on imaging surfaces of imaging elements 121a to 121c, respectively. Each of optical systems 122a to 122c includes a lens, a diaphragm, and a filter, for example. In particular, optical systems 122b and 122c each include free-form surface lenses.
The free-form surface lens is a lens in which the surface that refracts light to form an image has a non-arc shape and is not rotationally symmetric. Note that a cylindrical lens is one type of arc-shaped lens and is different from the free-form surface lens. The free-form surface lens has a non-arc shape that is not part of a perfect circle. Examples of a material of the free-form surface lens include, but are not particularly limited to, glass and resin. Examples of a method of manufacturing the free-form surface lens include, but are not particularly limited to, molding by using a mold such as a metal mold.
A set of free-form surface lenses 123 and 124 of this exemplary embodiment constitutes a lens that can cause a magnification ratio in an image to be formed to vary depending on a view angle. In this exemplary embodiment, free-form surface lenses 123 and 124 are particularly designed such that, in an entire image to be formed on an image surface, a magnification ratio of outer peripheral regions of a region with a predetermined range (that is, a predetermined view angle) including a center (that is, an optical axis) is larger than a magnification ratio of the region with the predetermined range. In other words, as illustrated in part (B) of
Optical system 122b in second imaging device 10b has been described above, but optical system 122c in third imaging device 10c has the same configuration. In other words, optical system 122c also includes the free-form surface lenses. Further, optical system 122c is also designed so as to cause its resolution to vary depending on the view angle.
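As a rough sketch of this view-angle-dependent magnification, the piecewise profile below models optical system 122b. The magnification values M1 and M2 and the helper name are hypothetical and not taken from the disclosure; the only property that matters is that the outer angular band (35 to 50 degrees) is magnified more than the central 35-degree band.

```python
# Hypothetical piecewise magnification profile for optical system 122b.
# M1 and M2 are illustrative values; the design only requires M2 > M1.
M1 = 1.0   # magnification in the central region (view angle <= 35 degrees)
M2 = 1.6   # magnification in the outer regions (35 to 50 degrees)

def magnification_122b(field_angle_deg: float) -> float:
    """Return the magnification applied at a given field angle,
    measured from the optical axis (half of the full view angle)."""
    a = abs(field_angle_deg)
    if a > 25.0:                     # beyond the 50-degree full view angle
        raise ValueError("outside the view angle of optical system 122b")
    return M1 if a <= 17.5 else M2   # 17.5 = half of the 35-degree overlap
```

A free-form surface lens allows such a profile to be designed freely; a rotationally symmetric spherical lens could not produce this kind of stepwise change.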
Part (A) of
In parts (A), (B) of
In contrast, with reference to part (B) of
Similarly, with reference to part (C) of
Optical system 122b is designed so as to have an optical characteristic described above. Therefore, as illustrated in part (B) of
The “resolution” herein is defined as a number of pixels in imaging elements 121a to 121c used to capture images in a unit view angle formed on imaging elements 121a to 121c through optical systems 122a to 122c (refer to Formula (1) below).
Resolution = (number of pixels used to capture an image with a predetermined view angle) / (predetermined view angle)    (1)
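Formula (1) can be checked numerically. The pixel counts below are hypothetical: suppose the central 960 pixels of a 1920-pixel sensor row in second imaging device 10b cover the 35-degree overlap range, leaving 960 pixels for the remaining 15 degrees of the 50-degree view angle.

```python
def resolution(num_pixels: float, view_angle_deg: float) -> float:
    """Formula (1): number of pixels used for a given view-angle range,
    divided by that view angle (pixels per degree)."""
    return num_pixels / view_angle_deg

# Hypothetical split of a 1920-pixel sensor row in second imaging device 10b:
center_res = resolution(960, 35.0)  # overlap region R20, ~27.4 pixels/degree
outer_res = resolution(960, 15.0)   # outer regions R21 + R22, 64 pixels/degree
# The outer regions come out denser, matching the design of optical system 122b.
```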
With reference to
As described above, optical system 122b is designed such that magnification ratio (M2) of regions R21, R22 on the outer sides of region R20 (view angle overlapping region) is set larger than magnification ratio (M1) of region R20 at the center part, as illustrated in part (B) of
Accordingly, a “resolution” of the image for second region r2 (=N2/θ) (a number of pixels per unit view angle) is larger (denser) than a “resolution” of the image for first region r1 (=N1/θ).
Note that the expression of different resolutions in this exemplary embodiment means a difference in resolution that exceeds the difference naturally produced by a combination of an optical system configured mainly with a spherical lens and a planar imaging element.
As described above, the magnification ratio of each of optical systems 122b, 122c of this exemplary embodiment varies depending on the view angle. As a result, the resolution of each of images formed on imaging surfaces of imaging elements 121b, 121c varies depending on the view angle (that is, the region in the image). For example, as illustrated in part (B) of
An operation of imaging system 100 configured as described above will be described below.
Imaging system 100 in
Image processing circuit 21 in image analysis device 20 performs image analysis on the image data received from imaging devices 10a to 10c, and detects a detection target ahead of vehicle 200. Examples of the detection target include a vehicle, a person, a bicycle, a traffic lane, a traffic sign, and an obstacle. Herein, for an image received from first imaging device 10a, an entire region of the image is used for the image analysis. On the other hand, for images respectively received from second and third imaging devices 10b, 10c, entire regions of the images are not used for the image analysis, but only partial regions are used for the image analysis.
More specifically, when the image analysis is performed on an image of region A1 of a view field whose view angle is 35 degrees, image processing circuit 21 performs the image analysis on entire region R10 of first image 50a in
Further, when the image analysis is performed on images of regions A2a and A2b of a view field whose view angle is not less than 35 degrees and not more than 50 degrees, image processing circuit 21 performs the image analysis on partial regions R21, R22 of second image 50b in
Further, when the image analysis is performed on images of regions A3a and A3b of a view field whose view angle is not less than 50 degrees, image processing circuit 21 performs the image analysis on partial regions R31, R32 of third image 50c in
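The region selection described above can be summarized as a lookup from field angle to the captured image and region used for the image analysis. The function below is a hypothetical helper; the angle bands simply restate the 35/50/120-degree example of this embodiment.

```python
def region_for_analysis(field_angle_deg: float):
    """Map a field angle (degrees from the common optical axis) to the
    captured image and region that image processing circuit 21 analyzes.
    Hypothetical helper for the 35/50/120-degree example."""
    a = abs(field_angle_deg)
    if a <= 17.5:        # within the 35-degree view field
        return ("first image 50a", "R10")
    if a <= 25.0:        # 35 to 50 degrees
        return ("second image 50b", "R21/R22")
    if a <= 60.0:        # 50 to 120 degrees
        return ("third image 50c", "R31/R32")
    return (None, None)  # outside the 120-degree view field
```

Because each angular band is read from the image whose resolution is highest there, no pixel of imaging elements 121a to 121c is spent on a band that another image already covers more densely.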
From first image 50a indicating the view field whose view angle is 35 degrees, a detection target located at a comparatively remote place (for example, 250 m ahead) is detected. Further, from third image 50c indicating the view field whose view angle is from 50 degrees to 120 degrees inclusive, a detection target located at a comparatively near place (for example, 60 m ahead) is detected. From second image 50b indicating the view field whose view angle is from 35 degrees to 50 degrees, a detection target located at a middle distance (for example, 150 m ahead) is detected.
With respect to the second image, the detection rate in region R20, corresponding to the view angle ranging from 0 degrees to 35 degrees inclusive, is 0.78, which is lower than the detection rate for the first image. On the other hand, the detection rate in regions R21, R22, corresponding to the view angle ranging from 35 degrees to 50 degrees, is 1.5, which is a good detection rate.
Similarly, with respect to the third image, the detection rate in region R30, corresponding to the view angle ranging from 0 degrees to 50 degrees, is 0.72, which is lower than the detection rate for the first image. On the other hand, the detection rate in regions R31, R32, corresponding to the view angle ranging from 50 degrees to 120 degrees inclusive, is 1.2, which is a good detection rate.
Image processing circuit 21 transmits the detection result of the detection target to control device 30. Control device 30 uses the detection result to determine a traffic condition ahead of the vehicle. Control device 30 controls action of control target 40 in the vehicle based on the detection result. Examples of control target 40 include a brake, an engine, a light, a speaker, a display, and a vibrator. Control target 40 may be a combination of those components. In other words, control device 30, for example, brakes the vehicle, controls a rotation speed of the engine, or turns on or off the light, according to the detection result. Furthermore, control device 30 outputs an alarm or a message from the speaker, displays the message on the display, or vibrates a seat or a steering wheel.
In this manner, vehicle 200 captures images ahead of the vehicle by using imaging system 100, analyzes the traffic situation ahead of the vehicle based on the captured images, and controls its own action based on the analysis result.
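The control flow from detection result to control target 40 can be sketched as a simple dispatch. The mapping below is hypothetical; it only illustrates the kind of rule control device 30 might apply to the examples named above (brake, engine, speaker, display).

```python
def control_actions(detections: set) -> list:
    """Hypothetical mapping from detected objects to actions on
    control target 40 (brake, engine, speaker, display, etc.)."""
    actions = []
    if {"person", "obstacle"} & detections:       # safety-critical targets
        actions.append("brake the vehicle")
        actions.append("output an alarm from the speaker")
    if "vehicle" in detections:                   # vehicle ahead
        actions.append("control the rotation speed of the engine")
    if "traffic sign" in detections:
        actions.append("display a message on the display")
    return actions
```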
As described above, imaging system 100 according to this exemplary embodiment includes first imaging device 10a and second imaging device 10b. First imaging device 10a includes optical system 122a (an example of a first optical system) for forming a subject image with a first view angle (for example, 35 degrees), and imaging element 121a (an example of a first imaging element) that captures the subject image formed through optical system 122a and generates first image data. Second imaging device 10b includes optical system 122b (an example of a second optical system) for forming a subject image with a second view angle (for example, 50 degrees) that is wider than the first view angle, and imaging element 121b (an example of a second imaging element) that captures the subject image formed through optical system 122b and generates second image data.
When a number of pixels that capture a subject image involved in a unit view angle is defined as a resolution, optical system 122b forms an image including region R20 (an example of a first region) and regions R21, R22 (an example of a second region), which do not overlap each other, such that a resolution in regions R21, R22 is higher than a resolution in region R20 corresponding to an imaging range of the first view angle, in an imaging surface of the subject image. Note that, in this exemplary embodiment, regions R21, R22 are regions corresponding to an imaging range with a view angle wider than the first view angle, for example.
Imaging system 100 may further include third imaging device 10c. Third imaging device 10c includes optical system 122c (an example of a third optical system) for forming a subject image with a third view angle (for example, 120 degrees) wider than the second view angle, and imaging element 121c (an example of a third imaging element) that captures the subject image formed through optical system 122c and generates third image data. In this case, optical system 122c forms an image including region R30 (an example of a third region) and regions R31, R32 (an example of a fourth region), which do not overlap each other, such that a resolution in regions R31, R32 is higher than a resolution in region R30 corresponding to an imaging range with the second view angle, in an imaging surface of the subject image formed through optical system 122c. Note that, in this exemplary embodiment, for example, regions R31, R32 are regions corresponding to an imaging range with a view angle wider than the second view angle.
With this configuration, in optical system 122b and optical system 122c, a resolution of an image in a region whose view angle does not overlap a smaller view angle of another optical system is set higher than a resolution of an image in a region whose view angle overlaps the smaller view angle of the other optical system. Accordingly, the image in the region whose view angle does not overlap the smaller view angle of the other optical system can be captured with high definition. Therefore, pixels in imaging elements 121b, 121c can efficiently be used, and thus sensing performance can be improved. In vehicle 200, the high-definition image is used for image analysis. This enables accurate detection of a detection target (for example, another vehicle, a person, or a traffic lane) and accurate control of action of vehicle 200 according to a road condition.
Optical system 122b and optical system 122c each include free-form surface lenses 123, 124. Use of the free-form surface lenses enables freely designing the magnification ratio that varies according to the view angle.
The first exemplary embodiment has been described above as an example of the technique disclosed in the present application. However, the technique in the present disclosure is not limited to this and can also be applied to exemplary embodiments having undergone changes, replacements, additions, omissions, and the like as appropriate. In addition, new exemplary embodiments can be implemented by combining the respective constituent elements described above in the first exemplary embodiment. Hence, other exemplary embodiments will be described below.
In the above exemplary embodiment, the number of imaging devices, namely, optical systems is set to three, but is not limited thereto. The idea of the present disclosure can be applied to any imaging system, as long as the imaging system includes a plurality of (at least two) imaging devices, namely, optical systems, which capture images of a view field in an identical direction with different view angles. For example, when one imaging device having a wider view angle and another imaging device having a narrower view angle are provided, an optical system of the one imaging device may be designed such that a resolution in a region other than a view angle overlapping region is higher than a resolution in the view angle overlapping region, in an image formed by the optical system of the one imaging device.
The view angles illustrated in the above exemplary embodiment are examples, and the view angles are not limited to 35 degrees, 50 degrees, and 120 degrees.
In the above exemplary embodiment, imaging devices 10a to 10c perform the gamma correction and the distortion correction, but image analysis device 20 may perform the gamma correction and the distortion correction. Alternatively, imaging devices 10a to 10c may perform the gamma correction, and image analysis device 20 may perform the distortion correction.
In the above exemplary embodiment, a resolution of the first image formed by first imaging device 10a is set uniform, but optical system 122a may be designed such that the resolution varies according to a view angle. Furthermore, optical system 122b may be designed such that the resolution of the image in the region whose view angle is not less than 35 degrees is not uniform but varies according to the view angle, in the image formed by optical system 122b in second imaging device 10b. This is also applied to optical system 122c in third imaging device 10c.
In the above exemplary embodiment, as illustrated in parts (B) and (C) of
In the above exemplary embodiment, imaging devices 10a to 10c are disposed so as to capture the images of the scene ahead of the vehicle, but imaging devices 10a to 10c may be disposed so as to capture an image of a rear scene or a side scene of the vehicle.
In the above exemplary embodiment, the example in which the control system including imaging system 100 is applied to the automobile has been described. However, this control system may be applied to another moving body (for example, any one of a train, a vessel, an airplane, a robot, a robot arm, and a drone) in addition to the automobile. In this case, the control system controls action of the other moving body according to the analysis result of imaging system 100. More specifically, the action of the other moving body is action of any one of a motor, an actuator, a display, a light-emitting diode (LED), and a speaker, for example. Alternatively, imaging system 100 or imaging devices 10a to 10c may be applied to a surveillance camera.
In the above exemplary embodiment, instead of the free-form surface lens used in the optical system, another type of lens (for example, a panomorph lens made by ImmerVision, Inc.) may be used, as long as the lens is a lens whose magnification ratio (that is, a resolution) can freely be designed according to a view angle.
The above exemplary embodiment discloses configurations of an imaging system and a moving body control system that will be described below.
(1) An imaging system including a configuration described below
Imaging system (100) includes first imaging device (10a) and second imaging device (10b). First imaging device (10a) includes first optical system (122a) having a first view angle (35 degrees), and first imaging element (121a) that captures a first subject image formed through the first optical system and generates first image data. Second imaging device (10b) includes second optical system (122b) having a second view angle (50 degrees) that is wider than the first view angle, and second imaging element (121b) that captures a second subject image formed through the second optical system and generates second image data. When a number of pixels that capture a subject image involved in a unit view angle is defined as a resolution, second optical system (122b) is configured to form an image including first region (R20) and second regions (R21, R22), which do not overlap each other, such that a resolution in second regions (R21, R22) is higher than a resolution in the first region corresponding to an imaging range of the first view angle, in an imaging surface of the second subject image. This imaging system can efficiently use pixels of the imaging elements and can improve sensing performance.
(2) The imaging system in (1) may further include image analysis device (20) that analyzes the pieces of image data generated by the respective imaging devices. Image analysis device (20) obtains information of a subject present in imaging range (A1) with a first view angle by analyzing first image data (50a), and obtains information of a subject present inside an imaging range with the second view angle and outside the imaging range with the first view angle (A2a, A2b), by analyzing second image data (50b).
(3) The imaging system in (1) or (2) may further include third imaging device (10c). Third imaging device (10c) includes third optical system (122c) having a third view angle (120 degrees) that is wider than the second view angle, and third imaging element (121c) that captures a third subject image formed through the third optical system and generates third image data. The third optical system is configured to form an image including third region (R30) and fourth regions (R31, R32), which do not overlap each other, such that a resolution in fourth regions (R31, R32) is higher than a resolution in third region (R30) corresponding to an imaging range with the second view angle, in an imaging surface of the third subject image.
(4) In the imaging system in (1) or (2), second and third optical systems (122b, 122c) may each include free-form surface lenses (123, 124).
(5) The above exemplary embodiment discloses a moving body control system having a configuration described below.
The moving body control system includes imaging system (100) in (2) described above and control device (30) that controls action of moving body (200) based on information analyzed by the imaging system. According to the moving body control system, the action of the moving body is controlled based on an analysis result of the image data with high definition, whereby accurate control according to a surrounding condition can be achieved.
(6) In moving body control system in (5), the imaging system may further include third imaging device (10c). Third imaging device (10c) includes third optical system (122c) having a third view angle that is wider than the second view angle, and third imaging element (121c) that captures a subject image formed through the third optical system and generates third image data. The third optical system is configured to form an image including a third region and a fourth region, which do not overlap each other, such that a resolution in the fourth region is higher than a resolution in the third region corresponding to an imaging range with the second view angle, in an imaging surface of the third subject image. The control device may further obtain information of a subject present inside an imaging range with the third view angle and outside the imaging range with the second view angle (A3a, A3b), by analyzing third image data (50c).
(7) In the moving body control system in (5) or (6), the respective imaging devices may be disposed to the moving body such that positions of optical axes of the respective imaging devices in a horizontal direction substantially coincide with each other.
(8) In the moving body control system in (5) or (6), the respective imaging devices may be attached at a front part of the moving body so as to capture images ahead of the moving body.
(9) In the moving body control system in (5) or (6), the moving body may be any one of an automobile, a train, a vessel, an airplane, a robot, a robot arm, and a drone.
As described above, the exemplary embodiments have been described as an example of a technique according to the present disclosure. The accompanying drawings and the detailed description have been provided for this purpose.
Therefore, the components described in the accompanying drawings and the detailed description may include not only the components essential for solving the problem but also components that are not essential for solving the problem, in order to illustrate the technique. For this reason, even if these unessential components are described in the accompanying drawings and the detailed description, they should not be immediately recognized as essential.
Further, since the above-described exemplary embodiments illustrate the technique in the present disclosure, various modifications, substitutions, additions, and omissions can be made within the scope of claims and equivalent scope of claims.
An imaging system according to the present disclosure can provide image data with high definition used for image analysis, and is useful for a control system (for example, a control system of a vehicle) that controls a control target based on a result of the image analysis.
Number | Date | Country | Kind
---|---|---|---
2017-097536 | May 2017 | JP | national