The present invention relates to an image processing method and an image processing device for detecting a foreground from an input image.
Background subtraction is known as a method of extracting target objects from an image. Background subtraction extracts target objects that do not exist in a background image by comparing a previously acquired background image with an observed image. The region occupied by an object that does not exist in the background image (the region occupied by the target object) is called the foreground region, and the other region is called the background region.
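For illustration only, the basic operation of background subtraction can be sketched in a few lines of Python. This sketch is not taken from any of the cited literature; the threshold value and the handling of color images are arbitrary assumptions.

```python
import numpy as np

def background_subtraction(background, observed, threshold=30.0):
    """Return a boolean foreground mask: True where the observed image
    differs sufficiently from the previously acquired background image."""
    diff = np.abs(observed.astype(np.float32) - background.astype(np.float32))
    if diff.ndim == 3:                 # color image: max difference over channels
        diff = diff.max(axis=2)
    return diff > threshold            # True: foreground region, False: background
```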
Patent literature 1 describes an object detection device that uses background subtraction to detect the state of a foreground (target object) relative to a background (background object). Specifically, as shown in the figure, the device includes a ranging unit 52 that generates a background depth map and a foreground depth map, and a state determination unit 53.
The state determination unit 53 calculates a difference between the background depth map and the foreground depth map. Then, the state determination unit 53 detects a state of the foreground based on the difference.
When a visible light camera is used in the ranging unit 52, a shadow of an object or light reflected from a background surface such as a floor may cause false detection of the target object. However, by using a near infrared light camera in the ranging unit 52, the influence of object shadows and the like is reduced.
However, near infrared light is also contained in sunlight. Therefore, an object detection device using a near infrared light camera (near infrared camera) cannot measure distances accurately under the influence of sunlight. In other words, an object detection device such as that described in patent literature 1 is not suitable for outdoor use.
Non-patent literature 1 describes an image processing device that uses a solar spectrum model. Specifically, as shown in the figure, the image processing device 60 includes a date and time specification unit 61, a position specification unit 62, a solar spectrum calculation unit 63, an estimated-background calculation unit 64, and an estimated-background output unit 65.
The solar spectrum calculation unit 63 calculates the solar spectrum from the date and time input from the date and time specification unit 61 and the position input from the position specification unit 62, using a sunlight model. The solar spectrum calculation unit 63 outputs a signal including the solar spectrum to the estimated-background calculation unit 64.
The estimated-background calculation unit 64 also receives a signal (input image signal) Vin including an input image (RGB image) captured outdoors. The estimated-background calculation unit 64 calculates an estimated background using the color information of the input image and the solar spectrum. The estimated background refers to the image that is predicted to be closest to the actual background. The estimated-background calculation unit 64 outputs the estimated background to the estimated-background output unit 65. The estimated-background output unit 65 may output the estimated background as it is as the output signal Vout, or it may output a foreground likelihood.
When outputting the foreground likelihood, the estimated-background output unit 65 obtains the foreground likelihood based on a difference between the estimated background and the input image signal, for example.
The image processing device 60 can obtain the estimated background or the foreground likelihood from an input image captured outdoors. However, it is difficult for the image processing device 60 to obtain the foreground likelihood from an input image captured indoors. When the image processing device 60 is used indoors, the indoor illumination light spectrum would have to be calculated instead of the solar spectrum, but the illumination light spectrum is generally unknown.
Patent literature 1: Japanese Patent Laid-Open No. 2017-125764
Non-Patent Literature
Non-Patent literature 1: A. Sato, et al., “Foreground Detection Robust Against Cast Shadows in Outdoor Daytime Environment”, ICIAP 2015, Part II, LNCS 9280, pp. 653-664, 2015
As explained above, there are separate technologies for detecting the foreground with high accuracy in an indoor environment and in an outdoor environment. However, the devices described in patent literature 1 and non-patent literature 1 cannot accurately detect the foreground in both indoor and outdoor environments.
It is an object of the present invention to provide an image processing method and an image processing device that can detect the foreground in both indoor and outdoor environments without being affected by shadows of objects, reflected light from the background, and the like.
An image processing method according to the present invention includes generating first foreground likelihood from a visible light image, generating second foreground likelihood from a depth image in which the same object is captured as that in the visible light image, generating reliability of the depth image using at least the visible light image and the depth image, and determining foreground likelihood of the object based on the first foreground likelihood and the second foreground likelihood, using the reliability of the depth image as a weight.
An image processing device according to the present invention includes first likelihood generation means for generating first foreground likelihood from a visible light image, second likelihood generation means for generating second foreground likelihood from a depth image in which the same object is captured as that in the visible light image, depth reliability generation means for generating reliability of the depth image using at least the visible light image and the depth image, and foreground detection means for determining foreground likelihood of the object based on the first foreground likelihood and the second foreground likelihood, using the reliability of the depth image as a weight.
An image processing program according to the present invention causes a computer to execute a process of generating first foreground likelihood from a visible light image, a process of generating second foreground likelihood from a depth image in which the same object is captured as that in the visible light image, a process of generating reliability of the depth image using at least the visible light image and the depth image, and a process of determining foreground likelihood of the object based on the first foreground likelihood and the second foreground likelihood, using the reliability of the depth image as a weight.
According to this invention, the foreground can be detected in both indoor and outdoor environments without being affected by shadows of objects or reflected light from the background.
Hereinafter, example embodiments of the present invention will be described with reference to the drawings.
The visible light foreground likelihood generation unit 11 generates the foreground likelihood of a visible light image for each predetermined region in the frame from at least one frame of the visible light image. The depth foreground likelihood generation unit 12 generates the foreground likelihood of a depth image (an image in which the depth value (distance) is expressed in light and shade) for each predetermined region in the frame from at least one frame of the depth image. The depth reliability generation unit 13 generates a depth image reliability for each predetermined region from at least one frame of the depth image. The foreground detection unit 14 detects the foreground, excluding the influence of object shadows and reflections from objects, based on the foreground likelihood of the visible light image, the foreground likelihood of the depth image, and the depth image reliability.
In this example embodiment, a visible light image is obtained by general visible light image acquisition means (for example, visible light camera 41). The depth image (distance image) is obtained by distance image acquisition means (for example, depth camera 42), such as a ToF (Time of Flight) camera that uses near infrared light. However, devices for obtaining the visible light image and the depth image are not limited to those. For example, a ToF camera that also has a function to obtain a visible light image may be used.
The image processing device 10 may input a visible light image that is stored in a memory unit (not shown) in advance. The image processing device 10 may also input a depth image that is stored in a memory unit (not shown) in advance.
The observed value gradient calculation unit 131 calculates the gradient of the observed values for each small region in the depth image in which the same object is captured as that in the visible light image. The size of the small region is arbitrary; for example, a small region is 5×5 pixels. The distance measurement impossible pixel determination unit 132 determines, for each small region, whether each pixel in the depth image is distance measurement impossible (the distance cannot be obtained). The first edge detection unit 133 detects edges in the depth image for each small region. The second edge detection unit 134 detects edges in the visible light image for each small region. The depth reliability determination unit 136 determines the depth image reliability using the gradient of the observed values, the distance measurement impossible pixels, the edges in the depth image, and the edges in the visible light image.
In this example embodiment, the depth reliability determination unit 136 uses information regarding the gradient of the observed values, the distance measurement impossible pixels, the edges in the depth image, and the edges in the visible light image; however, the depth reliability determination unit 136 may use only some of that information, or may use other information in addition to it.
Next, the operation of the image processing device 10 will be explained with reference to the flowchart in the figure.
The visible light foreground likelihood generation unit 11 generates the foreground likelihood of the visible light image using a solar spectrum model (step S11). The visible light foreground likelihood generation unit 11 can generate the foreground likelihood in various ways. For example, the visible light foreground likelihood generation unit 11 uses the method described in non-patent literature 1.
The visible light foreground likelihood generation unit 11 first calculates the spectrum of solar light (direct light and ambient light) at the shooting position and shooting time of the camera. The visible light foreground likelihood generation unit 11 then converts the spectrum into color information. The color information is, for example, information of each channel in the RGB color space, and is expressed as in equation (1).
[Math. 1]
direct light: I_d^c, ambient light: I_s^c (1)
The pixel values (for example, RGB values) of the direct light and the ambient light are expressed as follows. In equation (2), p, q, and r are coefficients that represent the intensity of the direct light or the ambient light. Hereinafter, pixel values are assumed to be RGB values in the RGB color space. In that case, the superscript c in equations (1) and (2) represents one of the R, G, and B values.
[Math. 2]
direct light: L_d^c = p·I_d^c, ambient light: L_s^c = q·I_d^c + r·I_s^c (2)
The visible light foreground likelihood generation unit 11 calculates an estimated background from the input visible light image (in this example, an RGB image) and the solar spectrum. Assuming that the RGB value of the background in the visible light image is B^c, the estimated background can be expressed as in equation (3).
[Math. 3]
estimated background: B^c = m·I_d^c + n·I_s^c (3)
In equation (3), m = (p+q)/l and n = q/l. When the RGB value of the input visible light image is C^c, the visible light foreground likelihood generation unit 11 obtains m and n that minimize the difference between C^c and B^c. The visible light foreground likelihood generation unit 11 substitutes the obtained m and n into equation (3) to obtain the RGB values of the estimated background image.
Then, the visible light foreground likelihood generation unit 11 regards the difference between the normalized RGB values C^c of the visible light image and the normalized RGB values of the estimated background image as the foreground likelihood. The visible light foreground likelihood generation unit 11 may also use a value obtained by processing the difference in some way as the foreground likelihood.
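The per-pixel fitting described above can be sketched as follows, assuming the linear model of equation (3). The least-squares solution and the normalization by the channel sum are illustrative assumptions, not necessarily the exact procedure of non-patent literature 1.

```python
import numpy as np

def estimate_background(C, I_d, I_s):
    """Fit B^c = m*I_d^c + n*I_s^c to the observed image C (H, W, 3)
    by least squares, solving for m and n at every pixel independently.
    I_d and I_s are the RGB values of direct and ambient light (shape (3,))."""
    A = np.stack([I_d, I_s], axis=1).astype(np.float64)  # (3, 2) design matrix
    pix = C.reshape(-1, 3).T.astype(np.float64)          # (3, N) observed RGB columns
    mn, *_ = np.linalg.lstsq(A, pix, rcond=None)         # (2, N): per-pixel m and n
    return (A @ mn).T.reshape(C.shape)                   # estimated background image

def foreground_likelihood(C, B, eps=1e-6):
    """Difference between normalized RGB values as the foreground likelihood."""
    Cn = C / (C.sum(axis=2, keepdims=True) + eps)
    Bn = B / (B.sum(axis=2, keepdims=True) + eps)
    return np.abs(Cn - Bn).sum(axis=2)
```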
The depth foreground likelihood generation unit 12 generates the foreground likelihood (foreground likelihood of the depth image) for each pixel in the depth image (step S12).
The figure is an explanatory diagram of a foreground likelihood generating method. To generate the foreground likelihood of a depth image, the depth foreground likelihood generation unit 12 creates a histogram of pixel values (luminance values) for each pixel over the depth images of multiple past frames. Since the background is stationary, positions where similar pixel values appear over multiple frames are likely to belong to the background. Since the foreground may move, positions where pixel values vary over multiple frames are likely to belong to the foreground.
The depth foreground likelihood generation unit 12 approximates the histogram of pixel values with a Gaussian distribution or a mixture Gaussian distribution, and derives the foreground likelihood from the Gaussian or mixture Gaussian distribution.
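A minimal sketch of the single-Gaussian case follows (the mixture case extends the same idea). The stack of past frames and the mapping of the deviation to a likelihood through the Gaussian tail are illustrative choices.

```python
import numpy as np

def depth_foreground_likelihood(past_frames, current, eps=1e-6):
    """Approximate the per-pixel history of depth values with a single
    Gaussian and score the current frame against it.
    past_frames: (T, H, W) stack of past depth images; current: (H, W)."""
    mean = past_frames.mean(axis=0)
    std = past_frames.std(axis=0) + eps
    # A pixel far from its historical distribution is likely foreground.
    z = np.abs(current - mean) / std
    return 1.0 - np.exp(-0.5 * z ** 2)   # in [0, 1): 0 = background-like
```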
It is noted that such generation of a foreground likelihood is just one example, and the depth foreground likelihood generation unit 12 can use various known methods of generating a foreground likelihood.
Next, the depth reliability generation unit 13 generates the depth image reliability in step S31 after performing the processes of steps S21 to S24.
In the depth reliability generation unit 13, the observed value gradient calculation unit 131 calculates the gradient of the observed values (luminance values) of pixels for each small region in the depth image (step S21). The distance measurement impossible pixel determination unit 132 determines whether or not each pixel is a distance measurement impossible pixel for each small region (step S22). For example, since a pixel value of 0 corresponds to the case where no reflected near infrared light is obtained, the distance measurement impossible pixel determination unit 132 regards a pixel with a pixel value of 0 as a distance measurement impossible pixel.
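Steps S21 and S22 can be sketched as follows. The Sobel operator and the rule that a region is flagged only when all of its pixels are distance measurement impossible are assumptions for illustration; the text does not fix a particular gradient operator or aggregation rule.

```python
import numpy as np
from scipy import ndimage

def gradient_and_invalid(depth, block=5):
    """Step S21/S22 sketch: per-region mean gradient magnitude and
    distance-measurement-impossible flags for a depth image, using
    5x5 small regions as in the example above."""
    d = depth.astype(np.float32)
    gx = ndimage.sobel(d, axis=1)       # Sobel: one possible gradient operator
    gy = ndimage.sobel(d, axis=0)
    grad = np.hypot(gx, gy)
    h, w = depth.shape
    H, W = h // block, w // block
    grad_blocks = grad[:H * block, :W * block].reshape(H, block, W, block).mean(axis=(1, 3))
    # Pixel value 0 means no reflected near infrared light was obtained.
    invalid = (depth == 0)
    invalid_blocks = invalid[:H * block, :W * block].reshape(H, block, W, block).all(axis=(1, 3))
    return grad_blocks, invalid_blocks
```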
The first edge detection unit 133 detects edges for each small region in the depth image (step S23). The second edge detection unit 134 detects edges for each small region in the visible light image (step S24).
The depth reliability determination unit 136 determines a depth image reliability (step S31), for example, as follows.
The depth reliability determination unit 136 assigns higher reliability to regions with a smaller gradient of observed values. A small gradient of observed values corresponds to a small spatial difference in distance (that is, a smooth surface) in the depth image. Since a smooth region is considered to be a stable region where the distance can be observed without being affected by a shadow of an object or reflected light, the depth reliability determination unit 136 assigns high reliability to such a region.
The depth reliability determination unit 136 assigns lower reliability to a region consisting of distance measurement impossible pixels.
In addition, when there is a region where the positions of edges in the depth image coincide with the positions of edges in the visible light image, the depth reliability determination unit 136 assigns higher reliability to the region.
An edge is a portion where the gradient of the observed values exceeds a predetermined threshold, but it is also a portion with a large amount of noise. However, when edges exist in the depth image in the same region where edges also exist in the visible light image, the edges in the depth image are unlikely to be false edges formed by noise. In other words, by referring to the edges in the visible light image, the depth reliability determination unit 136 increases the reliability of the portion of the depth image that is determined to be an edge.
When edges do not exist in the visible light image in the region where edges exist in the depth image, the depth reliability determination unit 136 assigns lower reliability to the region where the edges exist in the depth image.
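Putting these rules together, a rule-based determination of the depth image reliability might look like the following sketch. The gradient threshold and the use of the extreme values 1 and 0 (cf. the next paragraph) are illustrative assumptions, and the boolean per-region edge flags are assumed to come from the edge detection steps.

```python
import numpy as np

def depth_reliability(grad_blocks, invalid_blocks, depth_edges, visible_edges,
                      grad_thresh=5.0):
    """Rule-based depth image reliability per small region. Later rules
    override earlier ones. depth_edges / visible_edges are boolean
    per-region edge flags from steps S23 / S24."""
    S = np.zeros_like(grad_blocks, dtype=np.float32)
    S[grad_blocks <= grad_thresh] = 1.0    # smooth region: high reliability
    S[depth_edges & visible_edges] = 1.0   # depth edge confirmed by visible edge
    S[depth_edges & ~visible_edges] = 0.0  # depth edge without visible edge: likely noise
    S[invalid_blocks] = 0.0                # distance measurement impossible region
    return S
```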
For convenience, the depth reliability determination unit 136 can set “1” (the maximum value) as high reliability and “0” (the minimum value) as low reliability. However, the depth reliability determination unit 136 can also set a reliability that depends on the primary operating environment of the image processing device 10 and other factors.
The higher reliability assigned to the depth image means that the foreground in the depth image is reflected more strongly in the final determined foreground or foreground likelihood than the foreground in the visible light image.
The depth reliability determination unit 136 may assign a reliability of “0” or a value close to 0 to a region consisting of distance measurement impossible pixels, and assign the normalized cross-correlation between the corresponding regions of the visible light image and the depth image as the reliability of the other regions (regions containing pixels other than distance measurement impossible pixels). In this case, the cross-correlation between the visible light image and the depth image is used as the reliability.
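A sketch of this variant follows, assuming a grayscale visible light image aligned with the depth image and the same small-region grid as above. Clamping negative correlation to 0 so that the value can serve directly as the weight S is an added assumption.

```python
import numpy as np

def ncc_reliability(visible_gray, depth, invalid_blocks, block=5, eps=1e-6):
    """Reliability as the normalized cross-correlation between corresponding
    small regions of the (grayscale) visible light image and the depth image.
    Regions of distance-measurement-impossible pixels get reliability 0."""
    h, w = depth.shape
    H, W = h // block, w // block
    v = visible_gray[:H * block, :W * block].reshape(H, block, W, block).astype(np.float64)
    d = depth[:H * block, :W * block].reshape(H, block, W, block).astype(np.float64)
    v -= v.mean(axis=(1, 3), keepdims=True)
    d -= d.mean(axis=(1, 3), keepdims=True)
    num = (v * d).sum(axis=(1, 3))
    den = np.sqrt((v ** 2).sum(axis=(1, 3)) * (d ** 2).sum(axis=(1, 3))) + eps
    ncc = np.clip(num / den, 0.0, 1.0)  # clamp negatives so ncc can act as a weight
    ncc[invalid_blocks] = 0.0
    return ncc
```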
The foreground detection unit 14 determines the foreground or foreground likelihood (final foreground likelihood) (step S32). The foreground detection unit 14 uses the foreground likelihood of the visible light image generated by the visible light foreground likelihood generation unit 11, the foreground likelihood of the depth image generated by the depth foreground likelihood generation unit 12, and the depth image reliability generated by the depth reliability generation unit 13, as described below.
It is assumed that the foreground likelihood of the visible light image is P_v(x,y), the foreground likelihood of the depth image is P_d(x,y), and the depth image reliability is S(x,y), where x denotes the x-coordinate value and y denotes the y-coordinate value.
The foreground detection unit 14 determines the final foreground likelihood P(x,y) using the following equation (4).
P(x,y) = {1 − S(x,y)}·P_v(x,y) + S(x,y)·P_d(x,y) (4)
The foreground detection unit 14 may determine the foreground region by binarizing the foreground likelihood P(x,y) and output the foreground. The binarization is a process in which, for example, pixels whose foreground likelihood exceeds a predetermined threshold are regarded as foreground pixels.
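Equation (4) and the subsequent binarization reduce to a few lines. Here P_v, P_d, and S are assumed to be arrays of the same resolution (a per-region S would be upsampled to the pixel grid first), and the threshold of 0.5 is an arbitrary illustrative choice.

```python
import numpy as np

def detect_foreground(P_v, P_d, S, threshold=0.5):
    """Equation (4) followed by binarization. P_v, P_d, and S are assumed
    to be arrays of the same shape."""
    P = (1.0 - S) * P_v + S * P_d   # final foreground likelihood P(x, y)
    return P > threshold            # True: foreground pixel
```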
Although a flowchart in which each step is executed sequentially is shown in the figure, two or more of the steps may be executed in parallel, or the order of the steps may be changed.
As explained above, in this example embodiment, in the image processing device 10, the visible light foreground likelihood generation unit 11 generates the foreground likelihood of the visible light image using a solar spectrum model, the depth foreground likelihood generation unit 12 generates the foreground likelihood of the depth image, and the depth reliability generation unit 13 generates the reliability (depth image reliability) of the foreground likelihood of the depth image. Since the foreground detection unit 14 determines the final foreground likelihood based on the foreground likelihood of the visible light image and the foreground likelihood of the depth image, using the depth image reliability as a weight, it is possible to detect the foreground without being affected by a shadow of an object or reflected light in both indoor and outdoor environments.
The image processing device 10 of the first example embodiment compares the edges in the depth image with the edges in the visible light image, but in the second example embodiment, the image processing device compares the edges in the depth image with the edges in the near infrared image.
In the image processing device 20 shown in the figure, a depth reliability generation unit 13B is provided instead of the depth reliability generation unit 13. The depth reliability generation unit 13B includes a third edge detection unit 135, which detects edges in a near infrared image obtained by the near infrared light camera 43.
The image processing device 20 may input a near infrared image that is stored in a memory unit (not shown) in advance.
The third edge detection unit 135 detects edges for each small region in the near infrared image (step S23B). The process of step S23 (see the first example embodiment) is performed in the same manner. The other processing of the image processing device 20 is the same as the processing in the first example embodiment.
Although a flowchart in which each step is executed sequentially is shown in the figure, two or more of the steps may be executed in parallel, or the order of the steps may be changed.
In this example embodiment, in the image processing device 20, the visible light foreground likelihood generation unit 11 generates the foreground likelihood of the visible light image using the solar spectrum model, the depth foreground likelihood generation unit 12 generates the foreground likelihood of the depth image, and the depth reliability generation unit 13B generates the reliability (depth image reliability) of the foreground likelihood of the depth image. Since the foreground detection unit 14 determines the final foreground likelihood based on the foreground likelihood of the visible light image and the foreground likelihood of the depth image, using the depth image reliability as a weight, it is possible to detect the foreground without being affected by a shadow of an object or reflected light in both indoor and outdoor environments. In addition, since this example embodiment uses edge positions in the near infrared image when assigning reliability based on edge positions, it is expected to improve the accuracy of the reliability in a dark indoor environment.
In this example embodiment, the near infrared light camera 43 is provided separately from the depth camera 42, but if a camera that receives near infrared light is used as the depth camera 42, the depth reliability generation unit 13B may detect edges from an image from the depth camera 42 (an image obtained by receiving near infrared light for a predetermined exposure time). In that case, the near infrared light camera 43 is not necessary.
The image processing device 10 of the first example embodiment compared the edges in the depth image with the edges in the visible light image, and the image processing device 20 of the second example embodiment compared the edges in the depth image with the edges in the near infrared image, but in the third example embodiment, the image processing device compares the edges in the depth image with the edges in the visible light image and the edges in the near infrared image.
In the image processing device 30 shown in the figure, a depth reliability generation unit 13C is provided instead of the depth reliability generation unit 13. The depth reliability generation unit 13C includes the second edge detection unit 134, which detects edges in the visible light image, and the third edge detection unit 135, which detects edges in a near infrared image obtained by the near infrared light camera 43.
The image processing device 30 may input a near infrared image that has been previously stored in a memory unit (not shown).
In addition to the processes of steps S21 to S24, the third edge detection unit 135 detects edges for each small region in the near infrared image (step S23B). The other processing of the image processing device 30 is the same as the processing in the first example embodiment.
However, when assigning a reliability based on edge positions, the depth reliability determination unit 136 also compares the edge positions in the depth image with the edge positions in the near infrared image.
When there is a region where the positions of edges in the depth image coincide with the positions of edges in the visible light image and further coincide with the positions of edges in the near infrared image, the depth reliability determination unit 136 assigns higher reliability to the region.
Alternatively, when there is a region where the positions of edges in the depth image coincide with the positions of edges in the visible light image, the depth reliability determination unit 136 may assign high reliability to the region in the depth image. In addition, when there is a region where the positions of edges in the depth image coincide with the positions of edges in the near infrared image, the depth reliability determination unit 136 may assign high reliability to the region in the depth image.
Although a flowchart in which each step is executed sequentially is shown in the figure, two or more of the steps may be executed in parallel, or the order of the steps may be changed.
In this example embodiment, in the image processing device 30, the visible light foreground likelihood generation unit 11 generates the foreground likelihood of the visible light image using the solar spectrum model, the depth foreground likelihood generation unit 12 generates the foreground likelihood of the depth image, and the depth reliability generation unit 13C generates the reliability (depth image reliability) of the foreground likelihood of the depth image. Then, the foreground detection unit 14 determines the final foreground likelihood based on the foreground likelihood of the visible light image and the foreground likelihood of the depth image, using the depth image reliability as a weight, making it possible to detect the foreground without being affected by shadows of objects or reflected light in both indoor and outdoor environments. In addition, since this example embodiment uses edge positions in the near infrared image when assigning reliability based on edge positions, it is expected to improve the accuracy of the reliability in a dark indoor environment.
In this example embodiment, the near infrared light camera 43 is provided separately from the depth camera 42, but if a camera that receives near infrared light is used as the depth camera 42, the depth reliability generation unit 13C may detect edges from an image from the depth camera 42 (an image obtained by receiving near infrared light for a predetermined exposure time). In that case, the near infrared light camera 43 is not necessary.
In each of the above example embodiments, the image processing devices 10, 20, and 30 perform gradient calculation, distance measurement impossible pixel determination, and edge detection for each small region in the image, but they may also perform these processes for the entire frame.
The components in the above example embodiments may be configured with a single piece of hardware or a single piece of software. Alternatively, the components may be configured with a plurality of pieces of hardware or a plurality of pieces of software. Further, part of the components may be configured with hardware and the other part with software.
The functions (processes) in the above example embodiments may be realized by a computer having a processor such as a central processing unit (CPU), a memory, etc. For example, a program for performing the method (processing) in the above example embodiments may be stored in a storage device (storage medium), and the functions may be realized with the CPU executing the program stored in the storage device.
The storage device 1001 is, for example, a non-transitory computer readable medium. The non-transitory computer readable medium includes various types of tangible storage media. Specific examples of the non-transitory computer readable medium include magnetic storage media (for example, flexible disk, magnetic tape, hard disk drive), magneto-optical storage media (for example, magneto-optical disc), compact disc-read only memory (CD-ROM), compact disc-recordable (CD-R), compact disc-rewritable (CD-R/W), and semiconductor memories (for example, mask ROM, programmable ROM (PROM), erasable PROM (EPROM), flash ROM).
A memory 1002 is a storage means implemented by a random access memory (RAM), for example, and temporarily stores data when the CPU 1000 executes processing. A conceivable mode is that the program held in the storage device 1001 or in a transitory computer readable medium is transferred to the memory 1002, and the CPU 1000 executes processing on the basis of the program in the memory 1002.
A part of or all of the above example embodiments may also be described as, but not limited to, the following supplementary notes.
(Supplementary note 1) An image processing method comprising:
generating first foreground likelihood from a visible light image,
generating second foreground likelihood from a depth image in which the same object is captured as that in the visible light image,
generating reliability of the depth image using at least the visible light image and the depth image, and
determining foreground likelihood of the object based on the first foreground likelihood and the second foreground likelihood, using the reliability of the depth image as a weight.
(Supplementary note 2) The image processing method according to Supplementary note 1, wherein
the reliability of the depth image is generated after assigning relatively high reliability to a region where gradient of the observed values in the depth image is less than or equal to a predetermined value.
(Supplementary note 3) The image processing method according to Supplementary note 1 or 2, further comprising:
detecting edges in the depth image, and
detecting edges in the visible light image,
wherein when the edges are detected in a region of the visible light image, the region being equivalent to a region where the edges are detected in the depth image, relatively high reliability is assigned to the region.
(Supplementary note 4) The image processing method according to Supplementary note 1 or 2, further comprising:
detecting edges in the depth image, and
detecting edges in a near infrared image in which the same object is captured as that in the depth image,
wherein when the edges are detected in a region of the near infrared image, the region being equivalent to a region where the edges are detected in the depth image, relatively high reliability is assigned to the region.
(Supplementary note 5) The image processing method according to Supplementary note 1 or 2, further comprising:
detecting edges in the depth image,
detecting edges in the visible light image, and
detecting edges in a near infrared image in which the same object is captured as that in the depth image,
wherein when the edges are detected in a region of the visible light image and in a region of the near infrared image, both regions being equivalent to a region where the edges are detected in the depth image, relatively high reliability is assigned to the region.
(Supplementary note 6) The image processing method according to any one of Supplementary notes 1 to 5, further comprising:
assigning lower reliability to a region consisting of distance measurement impossible pixels.
(Supplementary note 7) An image processing device comprising:
first likelihood generation means for generating first foreground likelihood from a visible light image,
second likelihood generation means for generating second foreground likelihood from a depth image in which the same object is captured as that in the visible light image,
depth reliability generation means for generating reliability of the depth image using at least the visible light image and the depth image, and
foreground detection means for determining foreground likelihood of the object based on the first foreground likelihood and the second foreground likelihood, using the reliability of the depth image as a weight.
(Supplementary note 8) The image processing device according to Supplementary note 7, wherein
the depth reliability generation means includes at least an observed value gradient calculation unit which calculates gradient of the observed values in the depth image and a depth reliability determination unit which determines the reliability of the depth image, and
the depth reliability determination unit assigns relatively high reliability to a region where gradient of the observed values in the depth image is less than or equal to a predetermined value.
(Supplementary note 9) The image processing device according to Supplementary note 7 or 8, wherein
the depth reliability generation means includes a first edge detection unit which detects edges in the depth image, a second edge detection unit which detects edges in the visible light image, and a depth reliability determination unit which determines the reliability of the depth image, and
when the edges are detected in a region of the visible light image, the region being equivalent to a region where the edges are detected in the depth image, the depth reliability determination unit assigns relatively high reliability to the region.
(Supplementary note 10) The image processing device according to Supplementary note 7 or 8, wherein
the depth reliability generation means includes a first edge detection unit which detects edges in the depth image, a third edge detection unit which detects edges in a near infrared image in which the same object is captured as that in the depth image, and a depth reliability determination unit which determines the reliability of the depth image, and
when the edges are detected in a region of the near infrared image, the region being equivalent to a region where the edges are detected in the depth image, the depth reliability determination unit assigns relatively high reliability to the region.
(Supplementary note 11) The image processing device according to Supplementary note 7 or 8, wherein
the depth reliability generation means includes a first edge detection unit which detects edges in the depth image, a second edge detection unit which detects edges in the visible light image, a third edge detection unit which detects edges in a near infrared image in which the same object is captured as that in the depth image, and a depth reliability determination unit which determines the reliability of the depth image, and
when the edges are detected in a region of the visible light image and in a region of the near infrared image, both regions being equivalent to a region where the edges are detected in the depth image, the depth reliability determination unit assigns relatively high reliability to the region.
(Supplementary note 12) The image processing device according to any one of Supplementary notes 8 to 11, wherein
the depth reliability generation means includes a distance measurement impossible pixel determination unit which detects distance measurement impossible pixels, and
the depth reliability determination unit assigns lower reliability to a region consisting of the distance measurement impossible pixels.
(Supplementary note 13) An image processing program causing a computer to execute:
a process of generating first foreground likelihood from a visible light image,
a process of generating second foreground likelihood from a depth image in which the same object is captured as that in the visible light image,
a process of generating reliability of the depth image using at least the visible light image and the depth image, and
a process of determining foreground likelihood of the object based on the first foreground likelihood and the second foreground likelihood, using the reliability of the depth image as a weight.
While the present invention has been described above with reference to the example embodiments, the present invention is not limited to the aforementioned example embodiments. Various changes understandable by those skilled in the art can be made to the arrangements and details of the present invention within the scope of the present invention.
10, 20, 30 image processing device
11 visible light foreground likelihood generation unit
12 depth foreground likelihood generation unit
13, 13B, 13C depth reliability generation unit
14 foreground detection unit
41 visible light camera
42 depth camera
43 near infrared light camera
100 image processing device
101 first likelihood generation means
102 second likelihood generation means
103 depth reliability generation means
104 foreground detection means
131 observed value gradient calculation unit
132 distance measurement impossible pixel determination unit
133 first edge detection unit
134 second edge detection unit
135 third edge detection unit
136 depth reliability determination unit
1000 CPU
1001 storage device
1002 memory