The present application is based on and claims priority from Japanese Patent Application No. 2015-024261, filed on Feb. 10, 2015, the disclosure of which is hereby incorporated by reference in its entirety.
This invention relates to a device for processing images around a vehicle captured by a camera attached to the vehicle, and particularly to processing that converts the images around the vehicle into an overhead image.
When images around a vehicle are coordinate-converted into overhead images (bird's-eye view images) for display, three-dimensional objects appear distorted. It is therefore difficult to judge from the overhead images alone how close the vehicle can approach the three-dimensional objects. Accordingly, a device has been proposed that corrects the distortion of three-dimensional objects caused by the conversion into overhead images and displays the corrected images (Patent Literature 1: JP 2012-147149 A, for example).
The vehicle periphery image display device disclosed in Patent Literature 1 can correct the distortion of three-dimensional objects placed on a road surface. However, when a portion of a three-dimensional object is not placed on the road surface but floats in the air, that floating portion, once converted into the overhead image, is projected as deformed and inclined toward the road surface in a direction away from the vehicle. Relying on such overhead images, the vehicle may therefore hit the three-dimensional object when driven close to it. In addition, when the three-dimensional object is one the vehicle enters, such as a garage, it cannot be determined from the overhead images alone whether the vehicle can enter it or not.
The present invention is made in view of the above problems. The present invention determines whether a three-dimensional object detected on a road surface around a vehicle contains a space that the vehicle can access, and determines whether the vehicle can access that space or not.
To solve the above problems, a vehicle accessibility determination device of the present invention includes an imager that is attached to a vehicle and captures a range including a road surface around the vehicle; an image convertor that converts an original image captured by the imager into a virtual image viewed from a predetermined viewpoint; a three-dimensional object detector that detects from the virtual image a three-dimensional object having a height from the road surface; and a vehicle accessibility determiner that determines whether the vehicle is capable of accessing an inside of the three-dimensional object or a clearance between other three-dimensional objects.
Embodiments of a vehicle accessibility determination device of the present invention are described with reference to the drawings.
In this embodiment, the present invention is applied to a vehicle accessibility determination device that detects a three-dimensional object around a vehicle, determines whether the vehicle can access the detected three-dimensional object, and then informs a driver of the result.
(Description of Overall Configuration)
First, the functional configuration of this embodiment is described with reference to
As shown in
As shown in
As shown in
As shown in
Now, the configuration of hardware is described with reference to
ECU 110 includes CPU 112, a camera interface 114, a sensor interface 116, an image processing module 118, a memory 120, and a display controller 122. CPU 112 receives and transmits required data, and executes programs. The camera interface 114 is connected to CPU 112 and controls the front camera 12a. The sensor interface 116 obtains measured results of the vehicle condition sensor 140. The image processing module 118 performs image processing with predetermined programs stored in the module 118. The memory 120 stores intermediate results of the image processing, required constants, programs, or the like. The display controller 122 controls the monitor 150.
The image inputting portion 20, the image convertor 30, the three-dimensional object detector 40, the vehicle accessibility determiner 50, and the vehicle accessible and inaccessible range display 60 described in
(Description of a Flow of Processes Performed in the Vehicle Accessibility Determination Device)
Now, a series of the processes in the vehicle accessibility determination device 100 is described with reference to a flowchart shown in
(Step S10)
An image conversion process is performed. Specifically, the captured original image is converted into the virtual image.
(Step S20)
A three-dimensional object detection process is performed. The detailed process will be described later.
(Step S30)
A three-dimensional object area extraction process is performed. The detailed process will be described later.
(Step S40)
A vehicle accessibility determination process is performed. The detailed process will be described later.
(Step S50)
A vehicle inaccessible range display process is performed. The detailed process will be described later.
Hereinafter, each of the processes performed in the vehicle accessibility determination device 100 is described in order.
(Description of Image Conversion Process)
First, the operation of the image conversion process is described with reference to
Specifically, the image inputting portion 20 converts an output from the front camera 12a (
The original image 70(t) in
The image convertor 30 converts the original image 70(t) shown in
During the generation of the virtual image 72(t), in the garage 82 (the three-dimensional object) shown on the original image 70(t) captured by the front camera 12a, the left and right leg portions 83, 85 are projected as deformed on the virtual image 72(t). Specifically, the left and right leg portions 83, 85 are projected to be inclined toward the road surface in a direction away from the vehicle 10 as the leg portions 83, 85 extend upward. In other words, the leg portions 83, 85 are projected to be deformed such that the width between the leg portions 83, 85 becomes wider toward the top of the virtual image 72(t). The deformation of the leg portions 83, 85, that is, the deformation of areas each having a height from the road surface, spreads radially toward the periphery of the virtual image 72(t) from an installation position P1 (
Further, invisible areas 86, 86, which are outside the field of view of the front camera 12a, are generated in the virtual image 72(t). Accordingly, predetermined gray values (0, for example) are stored in the invisible areas 86, 86.
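The overhead view conversion described above can be sketched as follows. This Python fragment is an illustrative sketch only, not part of the disclosed embodiment: it assumes the mapping from the ground plane to the camera image is available as a 3x3 homography matrix (`H_ground_to_image` is a hypothetical name), and it fills pixels outside the camera's field of view with the predetermined gray value 0, like the invisible areas 86.

```python
import numpy as np

def warp_to_overhead(original, H_ground_to_image, out_shape):
    """Generate a virtual overhead image by inverse mapping: for each
    ground-plane pixel, look up the corresponding original-image pixel
    via the homography H (ground plane -> camera image)."""
    h_out, w_out = out_shape
    overhead = np.zeros((h_out, w_out), dtype=original.dtype)
    for v in range(h_out):
        for u in range(w_out):
            x, y, w = H_ground_to_image @ np.array([u, v, 1.0])
            px, py = int(round(x / w)), int(round(y / w))
            if 0 <= py < original.shape[0] and 0 <= px < original.shape[1]:
                overhead[v, u] = original[py, px]
            # pixels outside the field of view keep the predetermined
            # gray value (0), like the invisible areas 86
    return overhead
```

With the identity homography the overhead image simply reproduces the original, which makes the inverse-mapping structure easy to verify; a real device would derive the homography from the camera's installation layout.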
(Outline Description of the Three-Dimensional Object Detection Process)
Next, with reference to
As shown in
First, as shown in
Then, as shown in
Specifically, the expected virtual image 72′(t) is generated as follows. The vehicle condition sensor 140 (
Note that the frame difference in the second frame difference calculator 40b may be performed between the expected virtual image 72′(t−Δt) and the virtual image 72(t−Δt) actually obtained at time t−Δt, after generating the expected virtual image 72′(t−Δt) at time t−Δt based on the virtual image 72(t) at time t.
Performing the frame difference between the virtual image 72(t) and the expected virtual image 72′(t) matches the positions of patterns drawn on the road surface, such as the lane marker 81 shown in
In contrast, the vehicle shadow 87, which is formed in substantially the same position, can be deleted as shown in
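The second frame difference can be illustrated by the following Python sketch. It is not part of the disclosed embodiment: the function names are hypothetical, and pure forward motion (a simple downward shift in the overhead view) stands in for the general motion prediction from the vehicle condition sensor. Road-surface patterns aligned by the prediction cancel in the difference, while areas that do not follow the road surface remain.

```python
import numpy as np

def expected_overhead(prev_overhead, shift_px):
    """Predict the overhead image at time t from the one at t - dt by
    shifting it by the vehicle's travel (here: pure forward motion,
    i.e. a downward shift of shift_px rows in the overhead view)."""
    expected = np.zeros_like(prev_overhead)
    if shift_px > 0:
        expected[shift_px:, :] = prev_overhead[:-shift_px, :]
    else:
        expected = prev_overhead.copy()
    return expected

def frame_difference(a, b, threshold=10):
    """Binary mask of pixels whose gray values differ by more than the
    threshold: road paint aligned by the prediction cancels out, while
    three-dimensional objects (and moving shadows) remain."""
    return (np.abs(a.astype(int) - b.astype(int)) > threshold).astype(np.uint8)
```

For example, a lane marker that moves down two rows between frames is cancelled exactly when the prediction shifts the previous image by the same two rows.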
Here, a result of the frame difference performed in the first frame difference calculator 40a (
First, when areas having substantially the same features (shapes, for example) are not detected by the frame difference in the first frame difference calculator 40a in the vicinity of areas detected as a result of the frame difference in the second frame difference calculator 40b, the areas detected by the frame difference in the second frame difference calculator 40b can be presumed to be the vehicle shadow 87 on the road surface or reflections that occur on the road surface due to the sun or lighting lamps. These areas are therefore deleted, since they are determined not to represent three-dimensional objects.
Next, among the areas remaining after the above determination, only areas inclined toward the road surface in the direction away from the vehicle 10 are detected as three-dimensional objects each having a height from the road surface. Specifically, in the example of
The fact that the areas obtained as a result of the frame differences are inclined toward the road surface in the direction away from the vehicle 10 can be determined by referring to a result of an edge detection from the virtual image 72(t) and by confirming that edge directions in those areas extend along radial lines through the installation position P1 (
(Detailed Description of Three-Dimensional Object Detection Process)
Next, the three-dimensional object detection process is specifically described with reference to
First, a result of the first frame difference shown in
Next, a result of the second frame difference shown in
Only areas having the same shapes (features) and located in close positions relative to the detected first and second three-dimensional object candidate areas are selected; that is, so-called deletion of non three-dimensional objects is performed. This process can delete non three-dimensional objects whose positions move as the vehicle moves; specifically, the areas of the lane marker 81 and/or the vehicle shadow 87 can be deleted. The deletion of the non three-dimensional objects can be carried out, for example, by performing a logical AND operation on the detected first and second three-dimensional object candidate areas.
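The logical AND operation mentioned above can be sketched as follows (an illustrative fragment with hypothetical names, not the disclosed implementation): only pixels flagged in both candidate masks survive, so areas that appear in only one difference result, such as a lane marker or the vehicle shadow, are deleted.

```python
import numpy as np

def keep_three_dimensional_candidates(mask_first, mask_second):
    """Delete non three-dimensional objects (lane markers, the vehicle
    shadow) that appear in only one of the two frame-difference results
    by taking the logical AND of the first and second candidate masks."""
    return np.logical_and(mask_first > 0, mask_second > 0).astype(np.uint8)
```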
Note that features used in this embodiment are not limited to shapes of the areas. Specifically, areas located in close positions relative to each other may be detected using the luminance differences of the areas. Further, as features, similarity (similarity in edge directions, edge intensity) obtained as a result of performing the edge detection on the virtual images may be used. Also, similarity of a histogram of the edge detection result or a density histogram obtained from each of a plurality of small blocks made by dividing the virtual images, or the like may be used.
Next, the edge detector 40c (
Relative to remaining areas after the deletion of the non three-dimensional objects, a direction in which each of the areas extends is evaluated by referring to a result of the edge detection of the pixels in the same positions as the remaining areas. The areas constituting the three-dimensional objects are converted to areas radially extending from the installation position P1 (
At this time, the areas detected by the frame differences are not directly determined to be the three-dimensional object areas; instead, the edge detection result of the virtual image 72(t) is referred to. Accordingly, the influence of time variation in the exposure characteristics of the camera, shadows, and lighting, which may be mixed into the results of the frame differences, can be reduced. In addition, since the edge detection result of the virtual image 72(t) is referred to, the three-dimensional object area at time t−Δt is not left as an afterimage, and erroneous detection is suppressed in the case where the three-dimensional objects move. Accordingly, the detection performance for three-dimensional objects can be further improved.
The three-dimensional object areas 90, 92, which are considered to constitute three-dimensional objects, are detected through the series of processes described above. Then, the lowermost edge positions of the detected three-dimensional object areas 90, 92 are detected as road surface grounding positions 90a, 92a. The road surface grounding positions 90a, 92a represent positions where the three-dimensional objects are in contact with the road surface.
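Detecting the lowermost edge of a detected area as its road surface grounding position can be sketched as follows (an illustrative fragment, not the disclosed implementation; the function name is hypothetical). For each image column covered by the area mask, the lowest flagged row is taken as the point where the object meets the road.

```python
import numpy as np

def road_surface_grounding_position(mask):
    """Return, for each column containing part of a detected
    three-dimensional object area (nonzero mask pixels), the lowermost
    row index -- taken as the position where the object contacts the
    road surface in the overhead view."""
    positions = {}
    for col in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:
            positions[col] = int(rows.max())
    return positions
```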
(Description of a Flow of Three-Dimensional Object Detection Process)
Next, a flow of the three-dimensional object detection process is described with reference to a flowchart of
(Step S100)
The frame difference is performed in the first frame difference calculator 40a.
(Step S110)
The frame difference is performed in the second frame difference calculator 40b.
(Step S120)
A result of Step S100 is compared with a result of Step S110, and areas moving with the movement of the vehicle 10 are deleted as the non three-dimensional objects.
(Step S130)
The edge detection is performed on the virtual image 72(t).
(Step S140)
In the remaining areas as a result of Step S120, only areas inclined toward the road surface in the direction away from the vehicle 10 are detected.
(Step S150)
Road surface grounding positions of the detected areas are detected.
(Description of Three-Dimensional Object Area Extraction Process)
Next, relative to the three-dimensional objects detected from the virtual image 72 (overhead image), areas corresponding to the detected three-dimensional objects are extracted from the original image 70. Hereinafter, the operation of the three-dimensional object area extraction process is described with reference to
In the reverse of the process by which the virtual image 72 is created, a reverse overhead view conversion is performed on the three-dimensional object areas detected from the virtual image 72 to convert the areas into the coordinate system of the original image 70. The reverse overhead view conversion identifies the three-dimensional object areas in the original image 70(t) as shown in
Next, as shown in
As can be seen from
Then, rectangular areas W2, W3, and W4 are set by increasing the vertical size of the rectangular area W1 by a predetermined value, and the density histograms H (W2), H (W3), and H (W4) of the original image 70(t) corresponding to each of the rectangular areas are created each time the rectangular area is set. Examples of the density histograms H (W2), H (W3), and H (W4) created as above are shown in
As can be seen from
In this embodiment, as shown in
Note that as a measure for evaluating similarities in the density histograms H (Wi), various methods such as a Euclidean distance determination method, a histogram intersection method, or the like have been proposed and may be used to evaluate the similarities. However, since the sizes of the rectangular areas Wi (i=1, 2, . . . ) to be set are different from each other, an area (dimension) of each of the created density histograms H (Wi) differs from each other. Therefore, in order to calculate the similarities, it is necessary to perform a normalization process in advance so that the areas (dimensions) of the density histograms H (Wi) become equal. The similarities in the density histograms H (Wi) may be evaluated based on a mode method assuming bimodality of the histograms, stability of binarization threshold by discriminant analysis, or the like, by considering variations in the area (dimension) of the three-dimensional object areas and the non-three-dimensional object areas in the rectangular areas Wi.
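The normalization and similarity evaluation described above can be sketched with the histogram intersection method as follows. This is an illustrative fragment, not the disclosed implementation: the bin count and function names are assumptions, and normalizing each histogram to unit area is the step that makes rectangular areas Wi of different sizes comparable.

```python
import numpy as np

def density_histogram(patch, bins=16):
    """Density histogram of gray values in a rectangular area."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist

def histogram_similarity(h1, h2):
    """Histogram intersection after normalizing each histogram to unit
    area, so that areas Wi of different sizes can be compared.
    Returns a value in [0, 1]; 1.0 means identical distributions."""
    n1 = h1 / h1.sum()
    n2 = h2 / h2.sum()
    return float(np.minimum(n1, n2).sum())
```

Two patches with the same gray-value distribution score 1.0 regardless of their sizes, which is exactly the property needed when growing the rectangular areas W1, W2, W3, . . . and comparing their histograms.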
In addition, the rectangular areas Wi (i=1, 2, . . . ) contacting the three-dimensional object areas 90, 92 (
Therefore, it is possible to set areas contacting the three-dimensional object areas 90, 92, respectively, and to obtain two three-dimensional object areas. In that case, however, after the three-dimensional object areas are extracted respectively, it is determined whether these three-dimensional object areas form one mass or not. This determination is made, for example, by the same similarity evaluation of the density histograms as used in the extraction of the three-dimensional object areas described above. If the areas form one mass, they need to be integrated into one area.
Next, another example of extracting three-dimensional object areas is described with reference to
The three-dimensional object area extraction process described above is performed on the original image 70(t) shown in
(Description of a Flow of Three-Dimensional Object Area Extraction Process)
Next, a flow of the three-dimensional object area extraction process is described with reference to a flowchart of
(Step S200)
The three-dimensional object areas detected from the virtual image 72 are inversely converted and superimposed on the corresponding positions in the original image 70.
(Step S210)
A plurality of rectangular areas Wi (i=1, 2, . . . ) are set as three-dimensional object candidate areas. The rectangular areas Wi contact in a lateral direction the three-dimensional object areas superimposed on the original image 70.
(Step S220)
The density histograms H (Wi) are created with regard to areas corresponding to the rectangular areas Wi from the original image 70.
(Step S230)
Similarities of the created density histograms H (Wi) are calculated to find a set of the density histograms H (Wi) having a high degree of similarity. Then, in the set of the density histograms H (Wi) determined to have the high degree of similarity, a rectangular area Wi having the maximum vertical size is set as the three-dimensional object area.
(Step S240)
It is determined whether the three-dimensional object area is set or not. The process of
(Description of Vehicle Accessibility Determination Process)
Next, the operation of the vehicle accessibility determination process is described with reference to
Three-dimensional objects on the road surface do not necessarily contact the road surface at their bottom portions. Specifically, there are three-dimensional objects each having a floating area that does not contact the road surface. For example, with regard to the garage 82 (three-dimensional object) shown in
Accordingly, in the vehicle accessibility determination process, a road surface projecting position, at which the other vehicle 11 is projected onto the road surface from directly above, is calculated. The other vehicle 11 is a three-dimensional object area extracted in the three-dimensional object area extraction process. This process is performed in the road surface projecting position calculating portion 50b. The process can detect whether the extracted three-dimensional object area includes portions floating above the road surface.
The detection of the floating portions is performed as follows. As shown in
Next, the detected points Qi, Qj, . . . , which float in space, are projected down onto the line segment L to set road surface grounding points Ri, Rj, . . . . Note that the road surface grounding positions 94a, 94b remain as road surface grounding points.
The road surface grounding points Ri, Rj, . . . set as described above represent the road surface projecting positions at which the other vehicle 11 (three-dimensional object) is projected onto the road surface from directly above. Among the road surface grounding points Ri, Rj obtained as described above, those detected continuously in the left and right directions are connected to each other to create road surface grounding lines L1, L2, L3, each of which is a single line segment. With this process, it can be understood that the vehicle 10 can move toward the other vehicle 11 at least until the vehicle 10 reaches the positions of the road surface grounding lines L1, L2, L3. The vehicle 10 may hit the other vehicle 11 if it moves toward the other vehicle 11 beyond the road surface grounding lines L1, L2, L3.
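Connecting continuously detected grounding points into grounding lines can be sketched as follows (an illustrative fragment with a hypothetical name, not the disclosed implementation). Points that are adjacent in the left-right direction are merged into one line segment; a gap starts a new line.

```python
def grounding_lines(points):
    """Connect road surface grounding points that are continuous in
    the left-right direction into grounding lines. points is a sorted
    list of x coordinates (one per detected grounding point); returns
    a list of (start_x, end_x) line segments."""
    lines = []
    start = prev = points[0]
    for x in points[1:]:
        if x == prev + 1:
            prev = x          # still continuous: extend current line
        else:
            lines.append((start, prev))
            start = prev = x  # gap: start a new grounding line
    lines.append((start, prev))
    return lines
```

When all points are continuous, the result is a single segment, corresponding to the unification of the grounding lines into one road surface grounding line N.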
The road surface grounding lines L1, L2 are connected to the road surface grounding position 94a, and the road surface grounding lines L2, L3 are connected to the road surface grounding position 94b. Therefore, the road surface grounding positions 94a, 94b and the road surface grounding lines L1, L2, L3 are unified as one road surface grounding line N.
Next, it is determined whether there is a space in the detected three-dimensional object area that the vehicle 10 can access. This determination is performed in the vehicle accessible space identifying portion 50c shown in
Specifically, for example, as shown in
That is, whether the vehicle 10 can move farther beyond the road surface grounding lines Li, Lj, Lk is determined by confirming that the length of each of the road surface grounding lines Li, Lj, Lk is greater than the width of the vehicle 10, and that there are spaces higher than the height of the vehicle 10 above the road surface grounding lines Li, Lj, Lk.
Here, the original image 70(t) is generated through a perspective conversion. That is, the farther away a portion is located, the smaller it appears, toward the upper part of the image. Therefore, the actual lengths of the road surface grounding lines Li, Lj, Lk can be estimated from their vertical positions and lengths as detected on the original image 70(t). Further, the heights of the spaces accessible to the vehicle 10 at the positions of the road surface grounding lines Li, Lj, Lk can be estimated from the vertical positions of the road surface grounding lines Li, Lj, Lk on the original image 70(t).
For example, as shown in
Actual lengths of the road surface grounding lines Li, Lj, Lk and the heights Hi, Hj, Hk of the spaces above the road surface grounding lines Li, Lj, Lk can be respectively estimated based on installation layout information (the height of the camera, the depression angle of the camera, lens parameters) of the front camera 12a (
When the road surface grounding lines Li, Lj, Lk are detected, it can be determined, by referring to the contents of the stored table, whether the length of each of the road surface grounding lines Li, Lj, Lk exceeds the width of the vehicle 10, and whether there are spaces exceeding the height of the vehicle 10 above the grounding lines Li, Lj, Lk.
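The estimation of actual lengths and clearance heights from vertical image positions can be sketched with a strongly simplified pinhole model: flat road, camera axis horizontal, known camera height and focal length in pixels. This is an assumption-laden stand-in for the installation layout information (camera height, depression angle, lens parameters) described in the text, and all names are hypothetical.

```python
def ground_distance(row, horizon_row, focal_px, cam_height_m):
    """Distance along the road to a ground point that appears at the
    given image row (flat road, horizontal camera axis)."""
    return focal_px * cam_height_m / (row - horizon_row)

def segment_length_m(pixel_len, row, horizon_row, focal_px, cam_height_m):
    """Actual length of a horizontal road-surface segment (e.g. a road
    surface grounding line) from its pixel length and vertical position:
    farther rows correspond to larger real lengths per pixel."""
    z = ground_distance(row, horizon_row, focal_px, cam_height_m)
    return pixel_len * z / focal_px

def clearance_height_m(top_row, ground_row, horizon_row, cam_height_m):
    """Height of a space whose upper edge appears at top_row, above a
    grounding line that appears at ground_row in the same image column."""
    return cam_height_m * (1.0 - (top_row - horizon_row) / (ground_row - horizon_row))
```

In practice these relations would be precomputed into the lookup table mentioned above, indexed by the vertical position on the original image.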
(Description of a Flow of Vehicle Accessibility Determination Process)
Hereinafter, a flow of the vehicle accessibility determination process is described with reference to a flowchart of
(Step S300)
The road surface projecting positions of the three-dimensional object areas are calculated as the road surface grounding points Ri, Rj . . . . The specific content of the process is as described above.
(Step S310)
Among the road surface grounding points Ri, Rj . . . , successively located points are unified as the road surface grounding line N. Then, the length of the road surface grounding line N, and the vertical position of the road surface grounding line N on the original image 70 are calculated.
(Step S320)
The height H of the space above the road surface grounding line N is calculated.
(Step S330)
It is determined whether the vehicle 10 can access beyond the road surface grounding line N or not.
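The determination of this step can be sketched as a simple comparison of the estimated space against the vehicle's dimensions (an illustrative fragment, not the disclosed implementation; the safety margin is an added assumption, not part of the text).

```python
def vehicle_can_pass(line_length_m, space_height_m,
                     vehicle_width_m, vehicle_height_m, margin_m=0.2):
    """Decide whether the vehicle can move beyond a road surface
    grounding line: the line must be longer than the vehicle's width
    and the space above it taller than the vehicle's height."""
    return (line_length_m >= vehicle_width_m + margin_m and
            space_height_m >= vehicle_height_m + margin_m)
```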
(Description of Vehicle Inaccessible Range Display Process)
Next, the operation of the vehicle inaccessible range display process is described with reference to
Note that in the display image 74(t), predetermined gray values (0, for example) are stored in the invisible areas 86 which are outside the field of view of the front camera 12a, and the invisible areas 88 which are the shadow of the garage 82 (three-dimensional object).
It should be noted that the display form of the display image 74(t) is not limited to the examples shown in
As described above, according to the vehicle accessibility determination device 100 of the first embodiment of the present invention, the image convertor 30 converts the original image 70, which includes the road surface around the vehicle 10 and is captured by the front camera 12a (imager 12), into the virtual image 72 (overhead image) viewed from a predetermined viewpoint. The three-dimensional object detector 40 detects from the virtual image 72 a three-dimensional object having a height from the road surface. The vehicle accessibility determiner 50 determines whether the vehicle 10 can access the inside of the detected three-dimensional object or a clearance among other three-dimensional objects. Accordingly, even if the three-dimensional object has a floating area that does not contact the road surface, whether the vehicle 10 can access the space can be detected. Therefore, the vehicle 10 can be prevented in advance from hitting the three-dimensional object.
In addition, according to the vehicle accessibility determination device 100 of the first embodiment of the present invention, the three-dimensional object area extracting portion 50a extracts an area corresponding to the three-dimensional object from the original image 70. For the three-dimensional object area extracted by the three-dimensional object area extracting portion 50a, the road surface projecting position calculating portion 50b calculates the presence or absence of a floating area, which constitutes the three-dimensional object but does not contact the road surface, and the height of the floating area from the road surface. The road surface projecting position calculating portion 50b also calculates the road surface projecting position at which the floating area is projected onto the road surface from directly above. The vehicle accessible space identifying portion 50c identifies whether there is a space, inside the detected three-dimensional object or in a clearance among other three-dimensional objects, that the vehicle 10 can access, based on the presence or absence of the floating area and the road surface projecting position calculated by the road surface projecting position calculating portion 50b. Accordingly, the presence or absence of the floating area and the road surface projecting position can be calculated with a simple process.
Further, according to the vehicle accessibility determination device 100 of the first embodiment of the present invention, the vehicle accessible and inaccessible range display 60 superimposes, on the road surface position of the virtual image 72, the vehicle inaccessible range that the vehicle 10 cannot access, or the vehicle accessible range that the vehicle 10 can access, as determined by the vehicle accessibility determiner 50, and displays the superimposed ranges. Accordingly, it is possible to visualize how far the vehicle 10 can approach the three-dimensional object even if the three-dimensional object has a floating area that does not contact the road surface. Therefore, the vehicle 10 can be prevented in advance from hitting the three-dimensional object.
Moreover, according to the vehicle accessibility determination device 100 of the first embodiment of the present invention, the three-dimensional object detector 40 deletes the non three-dimensional objects. The deletion is based on a result of the frame difference between the two virtual images 72(t−Δt), 72(t) (overhead images) calculated in the first frame difference calculator 40a, and a result of the frame difference between the expected virtual image 72′(t) and the virtual image 72(t) calculated in the second frame difference calculator 40b. The two virtual images 72(t−Δt), 72(t) are generated from two original images captured at different times. The expected virtual image 72′(t) is a prediction, based on the traveling amount and the moving direction of the vehicle 10, of the virtual image that would be generated from the original image captured at time t, and is generated from the virtual image 72(t−Δt), which is one of the two virtual images. The three-dimensional object detector 40 then detects the three-dimensional object on the road surface by referring to the edge information of the virtual image 72(t) detected in the edge detector 40c. Accordingly, it is possible to distinguish a three-dimensional object having a height from the road surface from paint, stains, or dirt on the road, or from the vehicle shadow 87, and to detect the three-dimensional object with a simple process.
Furthermore, according to the vehicle accessibility determination device 100 of the first embodiment of the present invention, areas having the same shapes (features) and located in close positions are detected from the first three-dimensional object candidate areas detected by the first frame difference calculator 40a and the second three-dimensional object candidate areas detected by the second frame difference calculator 40b. When such an area is found, based on the edge detection result of the virtual image 72(t) by the edge detector 40c, to be inclined toward the road surface in a direction away from the vehicle 10, the area is detected as an area indicating a three-dimensional object on the road surface. Referring to the edge detection result of the virtual image 72(t) reduces the influence of time variation in the exposure characteristics of the camera, shadows, and lighting, which may be mixed into the results of the frame differences. Therefore, the detection performance for three-dimensional objects can be further improved.
In the first embodiment, an example in which one front camera 12a is used as the imager 12 is described. However, the number of cameras to be used is not limited to one. That is, it is also possible for the vehicle accessibility determination device to include a plurality of cameras directed to the front, the left, the right, and the rear of the vehicle 10 so as to be able to monitor the entire circumference of the vehicle 10. In this case, the original images captured by the respective cameras are respectively converted into a virtual image (overhead image), and then combined into one composite image. Processes described in the embodiment are performed on the composite image.
Further, in the first embodiment, an example of determining whether the vehicle 10 can access the garage 82 or the other vehicle 11, each of which is a three-dimensional object, is described. However, the invention is not limited to the case of determining whether the vehicle 10 can access the inside of a single three-dimensional object. That is, the invention may also be applied to determining whether the vehicle 10 can access a space between two other vehicles to park there, when those vehicles are parked with a space between them in a parking lot that has no lines indicating a parking space for each vehicle. In this case, the accessibility of the vehicle 10 is determined by detecting each of the two vehicles as a three-dimensional object, calculating the width and the height of the space between them, and comparing the calculated size (width and height) with that of the vehicle 10.
In addition, the procedure of the image processing is not limited to the one described above in the first embodiment. For example, the garage 82 is detected as one three-dimensional object from the virtual image 72(t) when the garage 82 appears without interruption within the virtual image 72(t). In such a case, a procedure of detecting the road surface grounding line N from the virtual image 72(t) and subsequently calculating the height H of the space above the road surface grounding line N may also be applicable. With this procedure, when the entire three-dimensional object appears in the virtual image 72(t), the three-dimensional object area extraction process and the accessibility determination process are performed using only the virtual image 72(t). Accordingly, the series of processes can be performed more easily.
Further, the vehicle accessibility determination device 100 described in the first embodiment is configured to obtain the road surface grounding line N of the detected three-dimensional object, and to display and inform the driver of only the range of the road surface grounding line N that the vehicle 10 cannot access. However, the invention is not limited to this configuration. For example, it may be configured to automatically park the vehicle 10 based on information on the range of the calculated road surface grounding line N that the vehicle 10 can access.
Moreover, in the first embodiment, the frame difference between the virtual images is performed to detect the three-dimensional object. However, the frame difference is not limited to a difference between gray values representing the brightness of the virtual images. That is, it is possible to perform an edge detection on the virtual images and then perform a frame difference between virtual images in which the detected edge strengths are stored, or between virtual images in which the detected edge directions are stored, to detect an area where a change has occurred. Further, it is also possible to divide the virtual image into a plurality of small blocks and to use the similarity of the density histogram and/or the histogram of the edge detection result of each small block.
Although the embodiments of the present invention are described in detail with reference to the drawings, the embodiments are only examples of the present invention. Therefore, the present invention is not limited only to the configurations of the embodiments, and it will be appreciated that any design changes and the like that do not depart from the gist of the present invention should be included in the invention.
Number | Date | Country | Kind
---|---|---|---
2015-024261 | Feb 2015 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2016/050236 | 1/6/2016 | WO | 00

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2016/129301 | 8/18/2016 | WO | A

Number | Name | Date | Kind
---|---|---|---
20080205706 | Hongo | Aug 2008 | A1
20120219183 | Mori | Aug 2012 | A1
20140218481 | Hegemann et al. | Aug 2014 | A1

Number | Date | Country
---|---|---
10-062162 | Mar 1998 | JP
11-16097 | Jan 1999 | JP
2009-093332 | Apr 2009 | JP
2009-188635 | Aug 2009 | JP
2011-57101 | Mar 2011 | JP
2012-147149 | Aug 2012 | JP
2012-175314 | Sep 2012 | JP
2012039004 | Mar 2012 | WO

Entry
---
International Search Report dated Mar. 29, 2016 in International (PCT) Application No. PCT/JP2016/050236.
Extended European Search Report dated Sep. 7, 2018 in European Application No. 16748946.7.

Number | Date | Country
---|---|---
20180032823 A1 | Feb 2018 | US