This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2017-049685, filed on Mar. 15, 2017; the entire contents of which are incorporated herein by reference.
Embodiments of the present invention relate to a mobile body spatial information calculation apparatus and a collision avoidance system.
Conventionally, driving support systems for supporting the driving of mobile bodies such as vehicles have been developed. Driving support systems are intended to assist driving using a computer based on information acquired from a radar or camera mounted on a vehicle or the like. For example, a collision avoidance system assists in avoiding collisions by observing the surroundings of a vehicle and, whenever there is a possibility of a collision, issuing a warning to the driver or operating the brake or steering wheel on behalf of the driver. Adopting such a collision avoidance system may prevent accidents before they occur and drastically improve vehicle safety.
The collision avoidance system projects a target object with which there is a possibility of collision into a metric space (XY space) using information acquired by a camera or a sensor such as a radar, and executes path planning in the metric space to avoid a collision with an obstacle and guide the vehicle along an optimum path. A vehicle-mounted collision avoidance system needs to execute path planning in real time under various environments and requires hardware with sufficient processing capability.
However, XY space conversion processing that projects sensor input into the metric space requires an enormous amount of calculation, which poses a problem of adversely affecting the performance of the collision avoidance system.
A mobile body spatial information calculation apparatus according to an embodiment includes an input section configured to receive sensor information including information on one or more target objects based on a position of an own mobile body from a sensor apparatus, an object recognition section configured to recognize the target object based on the sensor information, a calculation section configured to calculate a collision prediction time and a target portion angle based on the sensor information on the target object recognized by the object recognition section, and a spatial information generation section configured to generate spatial information including the collision prediction time and the target portion angle using the collision prediction time and the target portion angle calculated by the calculation section and store the generated spatial information.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings.
Some related arts of collision avoidance systems use, for example, a lidar apparatus or an image sensor. Such a related art acquires point cloud data of another vehicle using sensor information of a lidar apparatus mounted on the own vehicle and projects the point cloud data into a metric space (XY space) through ray casting processing. Path planning is then executed in the XY space into which the point cloud data is projected. That is, the mapping in the related arts converts angle information acquired from the sensor information into an XY space, and so the mapping requires an enormous amount of calculation. Note that when a collision with the other vehicle cannot be avoided using the brake alone, the related art further requires processing of converting the path planning information acquired from the XY space into information of a steering angle.
On the other hand, the collision avoidance system according to the present embodiment adopts a τ-θ space based on a τ margin, which corresponds to TTC (time to collision), that is, a collision prediction time with respect to a target object such as another vehicle, and an angle (hereinafter referred to as a "target portion angle") indicating a direction in which each part of the target object (hereinafter referred to as a "target portion") is located, using the traveling direction of the own vehicle as a reference. The mobile body spatial information calculation apparatus of the collision avoidance system according to the present embodiment performs mapping from sensor information of a sensor mounted on the own vehicle to the τ-θ space.
The amount of calculation required for mapping to the τ-θ space in the present embodiment is extremely small compared to mapping to the XY space in the related art. Furthermore, since a conversion between the target portion angle θ and the steering angle is relatively easy, even when it is not possible to avoid a collision with the other vehicle using the brake, the processing of converting the information of the path planning acquired from the τ-θ space to the information of the steering angle can be calculated with an extremely small amount of calculation.
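For illustration only (not part of the embodiment itself), the following minimal Python sketch shows how compact the τ-θ representation is and how a target portion angle could be converted into a steering angle under an assumed simple bicycle-model relation; the class and function names, the wheelbase value and the conversion horizon are all hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class TauThetaPoint:
    """One target portion expressed in the tau-theta space."""
    tau: float    # collision prediction time (tau margin) in seconds
    theta: float  # target portion angle in radians, 0 = own-vehicle traveling direction


def theta_to_steering_angle(theta: float, speed: float, wheelbase: float = 2.7,
                            horizon: float = 1.0) -> float:
    """Convert a desired heading change (target portion angle) into a steering angle
    using a simple bicycle-model approximation (an assumed relation for illustration):
    the yaw rate needed to rotate the heading by `theta` within `horizon` seconds is
    realized by a front-wheel steering angle delta with
    tan(delta) = wheelbase * yaw_rate / speed."""
    yaw_rate = theta / horizon
    return math.atan2(wheelbase * yaw_rate, max(speed, 0.1))


if __name__ == "__main__":
    point = TauThetaPoint(tau=2.5, theta=math.radians(8.0))
    print(theta_to_steering_angle(point.theta, speed=15.0))
```

The point of the sketch is that the conversion involves only a handful of arithmetic operations, in contrast to a projection into the XY space.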
The example in
The own vehicle M1 is provided with an image sensor I1 configured to take images within a predetermined view range in the traveling direction. A τ-θ space is generated using the sensor information acquired from the image sensor I1.
The τ-θ space shown in
In the τ-θ space in
The τ-θ space expresses a positional relationship when it is assumed that the own vehicle and the target object are relatively in the state of uniform linear motion. The time axis direction indicates not only the distance information between the own vehicle and the target object but also the τ margin including information on a relative speed, and the collision prediction time can be easily known from the value of the vertical axis of the τ-θ space.
The target objects such as the roadway outside line L1 and the wall O1, which are substantially straight lines, are expressed in the τ-θ space by a roadway outside line ML1 and a wall MO1 which are curved lines, the target portion angle of which decreases as the collision prediction time increases. Similarly, the traveling direction of the other vehicle M2 in the τ-θ space has a curved shape as indicated by an arrow attached to the other vehicle MM2.
That is, in a supposed case where the own vehicle and the target object are relatively in the state of uniform linear motion in a parallel direction, the τ-θ space corresponds to the change in the positional relationship between the target object and the own vehicle with the passage of time and is similar to an image picked up by a camera or the like, allowing the driver of the own vehicle to easily intuit the collision prediction time and the position of the target object.
Using the time at which the state of the τ-θ space in
In the present embodiment, the speed and the traveling direction of the own vehicle are controlled so that the target object is not included in the collision range. Note that it is possible to generate a τ-θ space in which sensor information is corrected, for example, in accordance with the steering angle of the own vehicle.
The collision avoidance system 30 in
A driving control section 22, a wheel steering apparatus 25 and a drive apparatus 26 shown in
Note that the automatic driving control section 20 and the driving control section 22 may each be configured by a processor such as a CPU that operates according to a program stored in a memory, which is not shown, to implement various functions.
The drive apparatus 26 such as an engine or a motor causes the wheels 42 to rotate and can cause the automobile 40 to move forward or backward. The drive control apparatus 24 drives the drive apparatus 26 according to a control signal from the automatic driving control section 20, and can control rotation of the wheels 42. Furthermore, the automobile 40 is provided with an accelerator pedal, which is not shown, and the drive control apparatus 24 controls the drive apparatus 26 based on the operation of the accelerator pedal by the driver, and can control a rotation speed or the like of the wheels 42.
The automatic driving control section 20 shown in
Sensor information from the sensor apparatus 21 is given to the sensor information input section 2. The sensor information input section 2 sequentially outputs the sensor information inputted at a predetermined rate to the object recognition section 3. The object recognition section 3 recognizes each object such as a person, vehicle or obstacle as a target object based on the sensor information. For example, the object recognition section 3 recognizes the shape of an object based on the sensor information, compares it with shapes and features of various objects stored in a memory which is not shown, and recognizes the object as a target object. Alternatively, the object recognition section 3 may also recognize the shape of the moving direction front side of a mobile body in an image. Furthermore, the object recognition section 3 may also recognize a shape portion obtained from, for example, a front view or back view of an automobile as a target object. The object recognition section 3 outputs information on the recognized object (target object) to the τ and θ calculation section 4. When a picked-up image is inputted as the sensor information, the object recognition section 3 may output coordinate information of the target object on an image and information on a view angle to the τ and θ calculation section 4.
The τ and θ calculation section 4 calculates a τ margin about the object recognized by the object recognition section 3 using the inputted sensor information. The τ margin indicates a time allowance until the own vehicle and the target object collide with each other if both keep the current relative speed. When the own vehicle and the target object relatively perform uniform linear motion, the τ margin can be expressed by the following equation (1) using a view angle (hereinafter referred to as “target object view angle”) ϕ when the target object is viewed from the own vehicle. The equation (1) shows that the τ margin can be calculated using the target object view angle ϕ and a time derivative value thereof.
τ=ϕ/(dϕ/dt) (1)
Furthermore, a document ("Real-time time-to-collision from variation of intrinsic scale" by Amaury Negre, Christophe Braillon, Jim Crowley and Christian Laugier) shows that the τ margin can be calculated by the following equation (2). Note that in equation (2), "Z" denotes a distance and "s" denotes the size of a target object.
τ=−Z/(dZ/dt)=s/(ds/dt) (2)
For example, the τ and θ calculation section 4 calculates the τ margin through calculation in the above equations (1) and (2) or the like. Suppose, for example, that the angle of view (view range) of the sensor apparatus 21 fixed to the automobile is known.
The view range of the sensor apparatus 21 is, for example, a known predetermined range with the traveling direction, which is a direction of a roll axis of the own vehicle M1, as a reference. The direction of the target object seen from the sensor apparatus 21 can be expressed by an angle with the traveling direction as a reference. Since the view range of the sensor apparatus 21 is known, each coordinate position in a picked-up image and an angle with respect to the traveling direction have a one-to-one correspondence. Therefore, it is possible to easily calculate the target object view angle ϕ from the coordinate position of the target object in the picked-up image.
For example, the τ and θ calculation section 4 uses a table that describes a correspondence relation between the coordinate position in the picked-up image and an angle with respect to the traveling direction. The τ and θ calculation section 4 may calculate the target object view angle ϕ with reference to the table using the output of the object recognition section 3.
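A minimal sketch of such a coordinate-to-angle correspondence, assuming a pinhole camera whose optical axis coincides with the traveling direction and whose horizontal view angle is known; the function name and the example values are hypothetical.

```python
import math


def pixel_to_angle(pixel_x: float, image_width: int, horizontal_fov: float) -> float:
    """Return the angle (radians) of an image column relative to the traveling
    direction, assuming a pinhole camera whose optical axis is aligned with the
    roll axis of the own vehicle and whose horizontal view angle is known.
    A real implementation could instead use a precomputed per-column table,
    as described above."""
    focal_px = (image_width / 2.0) / math.tan(horizontal_fov / 2.0)
    return math.atan2(pixel_x - image_width / 2.0, focal_px)


# Hypothetical example: a 1280-pixel-wide image with a 90-degree view angle.
table = [pixel_to_angle(x, 1280, math.radians(90.0)) for x in range(1280)]
print(math.degrees(table[0]), math.degrees(table[1279]))
```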
Furthermore, the τ and θ calculation section 4 calculates the target object view angle ϕ and its time derivative value using picked-up images sequentially inputted at a predetermined frame rate, and calculates the τ margin through calculation in the above equation (1).
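A minimal sketch of the calculation in equation (1), assuming the target object view angle ϕ is measured in two successive frames; the function name, the handling of a non-growing view angle and the example numbers are assumptions.

```python
import math


def tau_margin_from_view_angles(phi_prev: float, phi_curr: float, frame_dt: float) -> float:
    """Estimate the tau margin from two successive target object view angles,
    following equation (1): tau = phi / (d phi / d t).

    phi_prev, phi_curr : view angle subtended by the target object in the
                         previous and current frames (radians).
    frame_dt           : time between the two frames (seconds).
    Returns float('inf') when the view angle is not growing (no approach)."""
    dphi_dt = (phi_curr - phi_prev) / frame_dt
    if dphi_dt <= 0.0:
        return float("inf")
    return phi_curr / dphi_dt


# Hypothetical numbers: the view angle grows from 5.0 to 5.5 degrees over one
# 33 ms frame, giving roughly 0.37 s of tau margin.
print(tau_margin_from_view_angles(math.radians(5.0), math.radians(5.5), 1 / 30))
```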
Furthermore, the τ and θ calculation section 4 calculates the target portion angle θ of the target object with respect to the own vehicle. The target portion angle θ can be expressed by an angle using the traveling direction as a reference. The target portion angle θ can be calculated from coordinate positions of respective portions of the target object in the picked-up image.
The τ and θ calculation section 4 may designate the traveling direction in the image as a representative target portion angle θ0 and assume that a predetermined angle range around the representative target portion angle θ0 corresponds to the target object. For example, when the target object is an automobile, the target object may be assumed to be located within a range between target portion angles θ1 and θ2 at both ends of the front part (target object view angle ϕ).
Note that the representative target portion angle θ0 may be considered as an angle indicating the direction of the target object using the traveling direction of the own vehicle as a reference (hereinafter referred to as “target portion angle”). That is, regarding a known object such as an automobile, a target object of a known size may be arranged in the angle direction of the representative target portion angle θ0 in the τ-θ space and the τ-θ space may be expressed as a space based on the τ margin and the target portion angle.
Thus, the τ and θ calculation section 4 can calculate the τ margin and the target portion angle through simple calculation with an extremely small amount of calculation. The τ margin and the target portion angle are supplied to the τ-θ space generation section 5.
Note that the τ and θ calculation section 4 may also calculate the τ margin using not only the above equations (1) and (2) but also various publicly known techniques. For example, when an SfM (structure from motion) technique for forming a 3D image from a 2D image is adopted, it is possible to calculate the τ margin in the process of determining an arrangement of pixels on the image in a 3D space. Furthermore, when the sensor apparatus 21 is constructed of a lidar apparatus or the like, the τ and θ calculation section 4 may directly acquire the target portion angle θ from the output of the object recognition section 3. Note that when the τ margin is calculated by adopting the lidar apparatus or SfM, the target object need not be a specific person or object, but the τ margin may be calculated assuming respective points or a set of predetermined points as the object.
The τ-θ space generation section (spatial information generation section) 5 plots the τ margin and the target portion angle in the τ-θ space. The τ-θ space generation section 5 acquires information on the own vehicle M1 and generates τ-θ spatial information including information on a collision range. Note that the own vehicle information may include speed information of the own vehicle, steering-related information or the like. For example, the own vehicle information can be acquired from the driving control section 22. The τ-θ space generation section 5 causes the storage section 6 to store the generated τ-θ spatial information. The τ-θ space generation section 5 supplies the τ-θ spatial information to the recognized object management section 11.
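A minimal sketch of the τ-θ spatial information handled by the τ-θ space generation section 5, assuming a simple per-target store and a simplified collision-range test; the class, its methods and the thresholds are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class TauThetaSpace:
    """Minimal container for tau-theta spatial information.

    Stores, per target object id, the plotted (tau, theta) samples together
    with own-vehicle information used to derive the collision range."""
    own_speed: float = 0.0
    steering_angle: float = 0.0
    points: Dict[int, List[Tuple[float, float]]] = field(default_factory=dict)

    def plot(self, target_id: int, tau: float, theta: float) -> None:
        """Plot one target portion into the tau-theta space."""
        self.points.setdefault(target_id, []).append((tau, theta))

    def in_collision_range(self, target_id: int,
                           half_width: float, horizon: float) -> bool:
        """Very simple collision-range test: the target is dangerous when any
        sample lies within +-half_width radians of the traveling direction and
        its tau margin is shorter than the planning horizon (seconds)."""
        return any(abs(theta) <= half_width and tau <= horizon
                   for tau, theta in self.points.get(target_id, []))


space = TauThetaSpace(own_speed=15.0)
space.plot(target_id=1, tau=1.2, theta=0.05)
print(space.in_collision_range(1, half_width=0.1, horizon=3.0))
```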
Note that
Thus, the spatial information calculation section 10 calculates the τ margin and the target portion angle with a small amount of calculation and performs mapping from the output of the sensor apparatus 21 to the τ-θ space through the processing of plotting the calculated τ margin and target portion angle in the τ-θ space. That is, the mapping in the spatial information calculation section 10 in the present embodiment can be performed with an extremely small amount of calculation compared to the mapping from optical information to an XY space according to the related art.
The recognized object management section 11 manages the collision prediction time, that is, successively changing τ-θ spatial information for each target object in the τ-θ space and outputs the τ-θ spatial information to the control amount calculation section 12. The control amount calculation section 12 determines the position of each target object, collision possibility or the like with respect to the momentarily changing own vehicle based on the τ-θ spatial information for each target object.
The control amount calculation section 12 executes path planning using the τ-θ spatial information. That is, when the target object is an object having a possibility of collision, the control amount calculation section 12 calculates an amount of control to avoid a collision with the target object and outputs the amount of control as a steering control signal and a speed control signal. The steering control signal is intended to control an amount of steering, that is, the orientation of the wheels 42 that define the traveling direction of the automobile 40 and is supplied to the steering control apparatus 23. The speed control signal is intended to control the rotation speed of the wheels 42 that define the speed of the automobile 40, and is supplied to the drive control apparatus 24.
When the control amount calculation section 12 learns from the τ-θ spatial information that the obstacle MO3 exists within the collision range H1, the control amount calculation section 12 sets an arrival target region (painted-out region) PM1 of the own vehicle at a position where there will be no collision with the obstacle MO3 or any other target object at the time at which a collision is estimated to occur. Next, the control amount calculation section 12 determines a steering direction at every predetermined interval to reach the arrival target region PM1. The example in
Note that the example in
Note that although the present embodiment has described an example where an amount of control is calculated to avoid a collision with a target object located within a collision range and the speed or steering angle is automatically controlled, a display or an alarm such as warning sound may be generated to indicate the presence of a target object within the collision range. The control amount calculation section 12 can generate a signal for displaying the presence of a target object, a collision avoidance method or generating an alarm for the driver using a warning sound or voice. The monitor 27 outputs a video and sound based on the inputted signal.
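As a sketch of the arrival-target selection described above, the following hypothetical function picks an arrival target angle that is free of obstacles at the estimated collision time; representing the arrival target region by a single angle and preferring the angle closest to the current traveling direction are assumptions made for illustration.

```python
import math
from typing import List, Tuple


def choose_arrival_theta(occupied: List[Tuple[float, float]],
                         search_limit: float, step: float = 0.01) -> float:
    """Pick an arrival target angle (radians) that is free of obstacles.

    occupied     : list of (theta_min, theta_max) ranges covered by target
                   objects at the estimated collision time.
    search_limit : maximum angle to consider on either side of the
                   traveling direction.
    Returns the free angle closest to the current traveling direction (0)."""
    candidates = [0.0]
    theta = step
    while theta <= search_limit:
        candidates.extend([theta, -theta])
        theta += step
    for cand in candidates:
        if not any(lo <= cand <= hi for lo, hi in occupied):
            return cand
    raise RuntimeError("no collision-free arrival angle within search limit")


# Hypothetical obstacle blocking -2..+6 degrees ahead of the own vehicle.
occupied = [(math.radians(-2.0), math.radians(6.0))]
print(math.degrees(choose_arrival_theta(occupied, math.radians(20.0))))
```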
Next, operation of the present embodiment will be described with reference to
Suppose the own vehicle is running at a predetermined speed now. The sensor apparatus 21 acquires sensor information on objects around the own vehicle at a predetermined rate. The sensor information is sequentially inputted to the sensor information input section 2 and outputted to the object recognition section 3 (S1). The object recognition section 3 recognizes various objects as target objects based on the inputted sensor information and outputs the information on the target object to the τ and θ calculation section 4 (S2).
Regarding the target object, the τ and θ calculation section 4 uses the inputted information to calculate a τ margin and a target portion angle for the case where the own vehicle and the target object keep their current speeds, and outputs the τ margin and the target portion angle to the τ-θ space generation section 5 (S3).
The τ-θ space generation section 5 generates a τ-θ space by plotting the τ margin and the target portion angle, and causes the storage section 6 to store the τ-θ spatial information (S4). The τ-θ space generation section 5 may also acquire own vehicle information and generate τ-θ spatial information including information on the collision range. The τ-θ spatial information is supplied to the recognized object management section 11.
The spatial information calculation section 10 performs mapping from the output of the sensor apparatus 21 to the τ-θ space through processing with an extremely small amount of calculation, namely calculating the τ margin and the target portion angle and plotting the calculated values.
The recognized object management section 11 manages a collision prediction time for each target object (S5). The recognized object management section 11 outputs successively changing τ-θ spatial information to the control amount calculation section 12 for each target object.
The control amount calculation section 12 determines a position and a collision possibility or the like of each target object with respect to the momentarily changing own vehicle according to the τ-θ spatial information of each target object and determines the traveling direction and the speed to avoid a collision. The control amount calculation section 12 supplies a steering amount control signal for obtaining the determined traveling direction and a speed control signal for obtaining the determined speed to the driving control section 22.
The steering control apparatus 23 drives the wheel steering apparatus 25 based on the steering control signal and controls the orientation of the wheels 42. The drive control apparatus 24 drives the drive apparatus 26 based on the speed control signal and controls the rotation speed of the wheels 42. Thus, traveling of the own vehicle is automatically controlled so as to avoid obstacles.
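The flow from S1 through the steering and speed control can be summarized as the following sketch of one processing iteration; every object and method name is a placeholder standing in for the corresponding section of the embodiment, not an implementation of it.

```python
def collision_avoidance_step(sensor_frame, recognizer, calculator,
                             space_builder, manager, controller, driving_control):
    """One iteration of the flow described above (S1 to S5 and the subsequent
    control steps), written as a sketch with hypothetical method names."""
    targets = recognizer.recognize(sensor_frame)               # S2: object recognition
    for target in targets:
        tau, theta = calculator.compute_tau_theta(target)      # S3: tau margin / angle
        space_builder.plot(target.target_id, tau, theta)       # S4: tau-theta space
    space = space_builder.snapshot()
    manager.update(space)                                      # S5: per-target management
    steering_cmd, speed_cmd = controller.plan(manager.state()) # path planning
    driving_control.apply(steering_cmd, speed_cmd)             # steering / speed control
```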
Note that
In this way, the present embodiment generates τ-θ spatial information by plotting the collision prediction time and the target portion angle calculated based on sensor information. The collision prediction time and the target portion angle can be calculated with an extremely small amount of calculation and the amount of calculation required for mapping from sensor information to the τ-θ space is extremely small. This makes it possible to improve performance of the collision avoidance system.
(Modification 1)
The sensor information input section 2 outputs the sensor information from the lidar apparatus RI and the camera I2 to the object recognition section 3. The object recognition section 3 outputs information on the target object recognized based on the sensor information to the τ and θ calculation section 4.
The τ and θ calculation section 4 calculates a τ margin and a target portion angle θ of the target object based on the sensor information. The τ and θ calculation section 4 uses the τ margin and the target portion angle as an integrated unit. For example, the τ and θ calculation section 4 may use the calculated τ margin and target portion angle only when it is determined, through a matching operation between the τ margin and the target portion angle obtained from the sensor information of the lidar apparatus RI and those obtained from the sensor information of the camera I2, that the calculation result is consistent.
Furthermore, the τ and θ calculation section 4 may also calculate the τ margin and the target portion angle using sensor information that integrates the sensor information of the lidar apparatus RI and the sensor information of the camera I2. For example, the τ and θ calculation section 4 may correct the sensor information of the lidar apparatus RI with the sensor information of the camera I2, and then calculate the τ margin and the target portion angle. In this case, the τ and θ calculation section 4 can adopt a publicly known technique such as SfM.
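A minimal sketch of the consistency check described above, assuming each sensor path yields one (τ, θ) estimate per target; the function name, the tolerances and the averaging rule are assumptions.

```python
from typing import Optional, Tuple


def integrate_tau_theta(lidar_est: Tuple[float, float],
                        camera_est: Tuple[float, float],
                        tau_tol: float = 0.3,
                        theta_tol: float = 0.05) -> Optional[Tuple[float, float]]:
    """Use the two (tau, theta) estimates only when they are consistent.

    lidar_est, camera_est : (tau margin [s], target portion angle [rad]) from
                            each sensor path.
    Returns the averaged estimate when both agree within the tolerances,
    otherwise None (the calculation result is treated as inconsistent)."""
    tau_l, theta_l = lidar_est
    tau_c, theta_c = camera_est
    if abs(tau_l - tau_c) > tau_tol or abs(theta_l - theta_c) > theta_tol:
        return None
    return ((tau_l + tau_c) / 2.0, (theta_l + theta_c) / 2.0)


print(integrate_tau_theta((1.8, 0.04), (1.9, 0.05)))  # consistent -> averaged
print(integrate_tau_theta((1.8, 0.04), (3.5, 0.20)))  # inconsistent -> None
```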
In such a modification, in S11 in
More specifically, as shown in S21 in
When it is determined through a matching operation that the target objects 0 and 1 are identical objects, the object recognition section 3 outputs the sensor information of the target objects 0 and 1 to the τ and θ calculation section 4. The τ and θ calculation section 4 calculates a distance from the own vehicle to the target object 1 and a differential value of the distance based on the sensor information of the target objects 0 and 1 (S28). The τ and θ calculation section 4 calculates a τ margin from the distance and the differential value of the distance and calculates a target portion angle from the sensor information of the target object 1 (S29).
On the other hand, the sensor information of the camera I2 is inputted to the sensor information input section 2 in S1 in
The τ and θ calculation section 4 integrates the τ margin and the target portion angle calculated based on the sensor information from the lidar apparatus RI and the camera I2 (S13). The integrated τ margin and target portion angle are supplied to the τ-θ space generation section 5.
Note that when matching is applied to a specific object such as a person or vehicle using the camera I2 and the lidar apparatus RI as the integration technique, one of the results of the camera I2 and the lidar apparatus RI may be adopted with higher priority based on the magnitude of noise, for example. At places where matching is not achieved, both results may be displayed or only portions in which both results match may be displayed. Furthermore, when mapping is performed to the τ-θ space, the output of each sensor may be multiplied by an arbitrary probability distribution as another integration technique. According to this technique, portions mapped by a plurality of sensors have a higher probability of existence. Furthermore, the recognized object management section 11 may be caused to perform management on the assumption that objects are located at places where the probability of existence reaches or exceeds a threshold. The rest of operation is similar to the operation in the example in
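A minimal sketch of the probability-based integration mentioned above, assuming a discretized τ-θ grid on which each sensor detection is spread by a Gaussian weight and contributions are combined so that cells mapped by several sensors obtain a higher probability of existence; the grid resolution, the spreading rule and the threshold are assumptions.

```python
import math
from typing import Dict, List, Tuple

Cell = Tuple[int, int]  # (tau bin, theta bin)


def sensor_blob(tau: float, theta: float, d_tau: float = 0.2, d_theta: float = 0.02,
                sigma: float = 1.0, radius: int = 2) -> Dict[Cell, float]:
    """Spread one (tau, theta) detection over nearby grid cells with a Gaussian
    weight, i.e. an example of the 'arbitrary probability distribution' above."""
    ci, cj = round(tau / d_tau), round(theta / d_theta)
    blob = {}
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            blob[(ci + di, cj + dj)] = math.exp(-(di * di + dj * dj) / (2 * sigma * sigma))
    return blob


def combine(detections: List[Dict[Cell, float]]) -> Dict[Cell, float]:
    """Combine per-sensor detections so that cells mapped by several sensors get
    a higher probability of existence: p = 1 - prod(1 - p_i)."""
    existence: Dict[Cell, float] = {}
    for blob in detections:
        for cell, p in blob.items():
            existence[cell] = 1.0 - (1.0 - existence.get(cell, 0.0)) * (1.0 - p)
    return existence


lidar = sensor_blob(tau=1.6, theta=0.04)
camera = sensor_blob(tau=1.5, theta=0.05)
occupied = {c for c, p in combine([lidar, camera]).items() if p >= 0.9}
print(len(occupied))
```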
Thus, in the modification, the τ margin and the target portion angle are calculated based on the sensor information of the lidar apparatus RI and the sensor information of the camera I2, and calculations can be carried out with higher accuracy than calculations using the sensor information of either one.
(Modification 2)
For a target object located within the collision range, information on the collision prediction time and collision avoidance measures may be displayed in the vicinity of the displayed target object. The example in
Note that for a target object whose collision prediction time is equal to or greater than a predetermined time, the display of the collision prediction time may be omitted. Furthermore, information on the steering angle necessary to avoid the collision may also be displayed in addition to the steering instruction display 53. Furthermore, when the product of the area inside the frame and the collision prediction time with respect to the target object surrounded by the frame exceeds a predetermined threshold, it may be determined that the danger of a collision has increased and the display mode of the whole display screen 27a may be changed. For example, the brightness of the display may be changed in a constant cycle.
In modification 2, the monitor enables the driver to reliably recognize the danger of a collision, which further contributes to safe driving.
(Modification 3)
The example in
In the τ-θ space in
When the control amount calculation section 12 recognizes, based on the output of the recognized object management section 11, that the own vehicle MM11 will collide with the obstacle MM12 in t0 seconds, the control amount calculation section 12 sets an arrival target position so that the own vehicle MM11 avoids a collision with the obstacle MM12 t0 seconds later.
As shown in
Next, the control amount calculation section 12 sets a yaw angle at every predetermined interval Δt seconds in order to reach the arrival target yaw angle AP1. The example in
Furthermore, the control amount calculation section 12 estimates a τ-θ space after a predetermined time elapses, and can thereby improve the certainty of collision avoidance. The obstacle MM12, which has a negative relative speed with respect to the own vehicle MM11, has a size that increases in the yaw angle direction in the τ-θ space with the passage of time.
The description in
Note that to determine the arrival target yaw angle AP2 after the time t0+t1 elapses, the control amount calculation section 12 executes an affine transformation A(τ, θ) that performs translation or rotation in the τ-θ space, and thereby estimates the τ-θ space when the origin is moved after an elapse of time t0.
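A minimal sketch of such an estimation, assuming the affine transformation reduces to a translation along the time axis and a shift of the target portion angle by the planned yaw change; the function name and the assumption of relative uniform linear motion over the interval are made for illustration.

```python
import math
from typing import List, Tuple


def advance_tau_theta(points: List[Tuple[float, float]],
                      elapsed: float, yaw_change: float) -> List[Tuple[float, float]]:
    """Estimate the tau-theta space after `elapsed` seconds as a simple affine
    transformation: every tau margin is translated by -elapsed (the collision
    prediction time shrinks as time passes), and every target portion angle is
    shifted by -yaw_change when the own vehicle turns by yaw_change.

    Assumes the own vehicle and the targets stay in relative uniform linear
    motion over the interval; points whose tau margin has run out are dropped."""
    return [(tau - elapsed, theta - yaw_change)
            for tau, theta in points
            if tau - elapsed > 0.0]


# Hypothetical obstacle samples: after 0.5 s and a 3-degree yaw change, the
# first sample is dropped (tau exhausted) and the second shifts accordingly.
print(advance_tau_theta([(0.4, 0.02), (2.0, 0.10)], 0.5, math.radians(3.0)))
```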
The control amount calculation section 12 determines the steering angle at a predetermined time interval so as to reach the arrival target yaw angles AP1 and AP2. Reliable collision avoidance is achieved in this way.
When the length of the unknown region OV in the time axis direction differs, the range AH3 in
The present embodiment is different from the first embodiment in that an automatic driving control section 60 to which a target object movement estimation section 13 is added is adopted instead of the automatic driving control section 20. The target object movement estimation section 13 supplies a result of estimating a movement of a target object based on the output of the recognized object management section 11 to the control amount calculation section 12. The control amount calculation section 12 executes path planning using the output of the recognized object management section 11 and the estimation result of the target object movement estimation section 13.
Note that the target object movement estimation section 13 may estimate the movement of the target object using not only the output of the recognized object management section 11 but also the own vehicle information.
The target object movement estimation section 13 outputs the result of estimating the movement of each target object, using the τ margin and the predicted value τe thereof and the target portion angle and the predicted value θe thereof, to the control amount calculation section 12.
Next, operation of the present embodiment will be described with reference to
Now, suppose that the latest τ margin calculated when the other vehicle M31 is located at a distance x is stored in the recognized object management section 11. For example, suppose the τ margin of the other vehicle M31 at the present time is 1 second. The position of the other vehicle M31 when the other vehicle M31 is relatively in the state of uniform linear motion parallel to the own vehicle M21 is shown by a broken line. When the movement of the target object is not estimated, uniform linear motion is assumed. In this case, the other vehicle M31 is estimated to have been located at a distance 2x (M31cvbe) one second before and to be located at a position (M31cvaf) at which it passes by the own vehicle M21 one second later.
The other vehicle M31, located at a distance x at the present time, was located at a distance 4x one second before the present time as shown in
The other vehicle M31 is represented by “MM31” in
Furthermore, the target portion angle of the other vehicle M31 at the present time is substantially equal to the target portion angle of the other vehicle M31 one second before. That is, the target portion angle remains unchanged from one second before to the present time, and it can be seen that the other vehicle M31 is advancing toward the own vehicle M21.
The target object movement estimation section 13 estimates such a movement of the target object by calculating, for example, predicted values τe and θe at a time different from the present time. The target object movement estimation section 13 supplies the result of estimating the movement of the other vehicle to the control amount calculation section 12. The control amount calculation section 12 controls the steering angle and the speed so that the own vehicle M21 does not collide with the other vehicle M31 based on outputs of the recognized object management section 11 and the target object movement estimation section 13.
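A minimal sketch of this estimation, assuming the predicted values are those of relative uniform linear motion (τe decreases by the elapsed time and θe stays unchanged) and that deviations beyond hypothetical tolerances are classified as the target object's own movement.

```python
from typing import Tuple


def estimate_target_motion(prev: Tuple[float, float], curr: Tuple[float, float],
                           dt: float, tau_tol: float = 0.1,
                           theta_tol: float = 0.02) -> str:
    """Compare the measured (tau, theta) with the values predicted under
    relative uniform linear motion and classify the deviation.

    Under that assumption the predicted values are tau_e = prev_tau - dt and
    theta_e = prev_theta (an object heading straight for the own vehicle keeps
    its target portion angle). The tolerance values are assumptions."""
    prev_tau, prev_theta = prev
    curr_tau, curr_theta = curr
    tau_e, theta_e = prev_tau - dt, prev_theta
    closing_faster = (tau_e - curr_tau) > tau_tol    # tau shrinking faster than predicted
    turning = abs(curr_theta - theta_e) > theta_tol  # lateral motion relative to own vehicle
    if closing_faster and not turning:
        return "approaching faster than relative uniform linear motion"
    if turning:
        return "moving laterally relative to the own vehicle"
    return "consistent with relative uniform linear motion"


# Hypothetical: one second ago tau was 2.0 s; now it is only 0.6 s at the same angle.
print(estimate_target_motion((2.0, 0.00), (0.6, 0.00), dt=1.0))
```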
The present embodiment estimates the movement of a target object according to, for example, τ margins at different times and predicted values thereof, and target portion angles and predicted values thereof. Path planning of the own vehicle is executed based on the τ-θ spatial information and the movement estimation result. High-accuracy path planning for avoiding a collision can be achieved in this way.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel devices and methods described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modification as would fall within the scope and spirit of the inventions.
Number | Date | Country | Kind
---|---|---|---
2017-049685 | Mar 2017 | JP | national