The present invention relates to a filtering process technique.
Recent progress in deep learning has been increasing the accuracy of image recognition. A CNN (Convolutional Neural Network) is known as a method used in deep learning.
In the CNN, a plurality of layers are hierarchically connected, and each layer contains a plurality of feature images.
A feature image in a current layer is calculated by using a feature image in a preceding layer and a filter coefficient corresponding to the preceding layer. Calculating one feature image in the current layer requires information of a plurality of feature images in the preceding layer. A product-sum operation for obtaining a feature image in the current layer is performed in accordance with equation (1) below:
O_{i,j}(n) = Σ_{m=1}^{M} Σ_{x=0}^{X−1} Σ_{y=0}^{Y−1} ( I_{i+x,j+y}(m) × C_{x,y}(m, n) )   (1)
where n is the index of a feature image in the current layer, and m (m = 1 to M) is the index of a feature image in the preceding layer. Oi,j(n) indicates feature data (a product-sum operation result) at a position (i, j) in the feature image having index n in the current layer. Ii,j(m) indicates feature data at a position (i, j) in the feature image having index m in the preceding layer. Cx,y(m, n) indicates the filter coefficient between the feature image having index n in the current layer and the feature data at the position (x, y) in the feature image having index m in the preceding layer. In equation (1), the number of filter coefficients (C0,0(m, n) to CX−1,Y−1(m, n)) is (X×Y), and the coefficients change in accordance with the feature images. X and Y are variables indicating the reference range. The number of product-sum operations for calculating one feature data of the current layer is (M×X×Y).
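As a concrete illustration of equation (1), the following Python sketch performs the product-sum operation with plain nested loops. It is a minimal sketch: the function name conv_layer and the array names and shapes are our own illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def conv_layer(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """features: (M, H, W) feature images of the preceding layer.
    weights: (N, M, Y, X) filter coefficients Cx,y(m, n).
    Returns the (N, H-Y+1, W-X+1) product-sum results Oi,j(n)."""
    M, H, W = features.shape
    N, _, Y, X = weights.shape
    out = np.zeros((N, H - Y + 1, W - X + 1))
    for n in range(N):
        for j in range(H - Y + 1):           # position (i, j) of the output feature data
            for i in range(W - X + 1):
                acc = 0.0
                for m in range(M):           # sum over all M feature images of the preceding layer
                    for y in range(Y):
                        for x in range(X):   # M*X*Y multiply-accumulates per output value
                            acc += features[m, j + y, i + x] * weights[n, m, y, x]
                out[n, j, i] = acc
    return out
```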
After the filtering process, processes such as an activation process and pooling are performed based on the network structure by using the product-sum operation result Oi,j(n), thereby calculating feature images of the current layer.
The CNN is also applied to image segmentation. Dilated convolution described in Y. Wei, et al., “Revisiting Dilated Convolution: A Simple Approach for Weakly- and Semi-Supervised Semantic Segmentation,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018 is a technique for improving the accuracy of image segmentation. When performing the dilated convolution, a product-sum operation is performed in accordance with equation (2) below:
O_{i,j}(n) = Σ_{m=1}^{M} Σ_{x=0}^{X−1} Σ_{y=0}^{Y−1} ( I_{i+Dx,j+Dy}(m) × C_{x,y}(m, n) )   (2)
where the variable D is the dilation rate of the dilated convolution. When D is 1, equation (2) is the same as equation (1). The larger the value of D, the wider the reference range in a feature image of the preceding layer: after dilation, the reference range changes from (X×Y) to [D×(X−1)+1]×[D×(Y−1)+1]. In this operation, the filter coefficients are processed without skipping; the feature data of a feature image, however, are referred to at intervals of (D−1) data, that is, feature data are skipped in the horizontal and vertical directions.
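For comparison with equation (1), the following sketch implements the dilated convolution of equation (2); only the feature-data indexing changes, being strided by D. As before, the function name dilated_conv and the array shapes are illustrative assumptions.

```python
import numpy as np

def dilated_conv(features: np.ndarray, weights: np.ndarray, D: int) -> np.ndarray:
    """Dilated convolution of equation (2): the X*Y filter coefficients are used
    as-is, but the feature data are referred to at stride D (i.e., skipped)."""
    M, H, W = features.shape
    N, _, Y, X = weights.shape
    out_h = H - D * (Y - 1)                  # reference range widens to D*(Y-1)+1
    out_w = W - D * (X - 1)
    out = np.zeros((N, out_h, out_w))
    for n in range(N):
        for j in range(out_h):
            for i in range(out_w):
                acc = 0.0
                for m in range(M):
                    for y in range(Y):
                        for x in range(X):
                            acc += features[m, j + D * y, i + D * x] * weights[n, m, y, x]
                out[n, j, i] = acc
    return out
```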
In the CNN, the number of product-sum operations is large. Therefore, when applying the CNN to a portable terminal or to an embedded system such as an in-vehicle device, it is necessary to reduce the transfer amounts of feature data and filter coefficients, efficiently perform the product-sum operations, and shorten the overall processing time. Japanese Patent Laid-Open No. 2018-67154 has proposed an arrangement that processes a plurality of feature data in parallel.
The method described in Japanese Patent Laid-Open No. 2018-67154 calculates output data in parallel by using a plurality of different feature data and a common filter coefficient. However, when performing processing such as the dilated convolution described in Y. Wei, et al., “Revisiting Dilated Convolution: A Simple Approach for Weakly- and Semi-Supervised Semantic Segmentation,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, it is impossible to refer to feature data of feature images in a preceding layer while skipping the data. Since a register holding feature data must be connected to a register holding the feature data at the skip destination, control and wiring become complicated. In addition, when dilating a filter by increasing the filter size in order to perform the dilated convolution, the filter coefficient transfer amount increases.
The present invention provides a technique for reducing the transfer amount of filter coefficients for use in a filtering process in a case in which the filtering process is performed by extending the range of data to be referred to.
According to the first aspect of the present invention, there is provided a data processing apparatus comprising: a holding unit configured to hold filter coefficients of a transferred filter; a generating unit configured to generate an extended filter by extending a size of the transferred filter while sequentially reading out the filter coefficients held in the holding unit; and an arithmetic unit configured to perform convolution by using filter coefficients of the extended filter.
According to the second aspect of the present invention, there is provided a data processing method to be performed by a data processing apparatus, comprising: holding filter coefficients of a transferred filter; generating an extended filter by extending a size of the transferred filter while sequentially reading out the held filter coefficients; and performing convolution by using filter coefficients of the extended filter.
According to the third aspect of the present invention, there is provided a non-transitory computer-readable storage medium storing a computer program for causing a computer to function as: a generating unit configured to generate an extended filter by extending a size of a transferred filter while sequentially reading out filter coefficients of the transferred filter; and an arithmetic unit configured to perform convolution by using filter coefficients of the extended filter.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note, the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but limitation is not made to an invention that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
First, a hardware configuration example of a data processing apparatus that functions as a filtering apparatus for performing a filtering process on a plurality of data will be explained with reference to the block diagram of FIG. 3.
An input unit 301 is a user interface such as a keyboard, a mouse, or a touch panel. The user can input various instructions to a CPU 306 by operating the input unit 301.
A storage unit 302 is a memory device for storing various computer programs and data. Examples of the storage unit 302 are a hard disk, a flexible disk, a CD-ROM, a CD-R, a DVD, a memory card, a CF card, a smart medium, an SD card, a memory stick, an xD picture card, and a USB memory. The computer programs stored in the storage unit 302 include a computer program for causing the CPU 306 or a processing unit 305 to execute or control each processing (to be described later) to be performed by the data processing apparatus.
A communication unit 303 performs data communication with an external apparatus. For example, the communication unit 303 can receive, from an external apparatus, various kinds of information to be described later by assuming that they are stored in the storage unit 302, and store the received information in the storage unit 302.
A display unit 304 is a display device having a liquid crystal screen or a touch panel screen, and can display the results of processing performed by the CPU 306 and the processing unit 305 as images and characters. Note that the display unit 304 need not be an internal unit of the data processing apparatus and may also be an external device of the data processing apparatus. In this case, the display unit 304 is connected to the data processing apparatus so that the display unit 304 can communicate with the data processing apparatus by wired or wireless communication. It is also possible to form a touch panel screen by integrating the input unit 301 and the display unit 304.
The processing unit 305 performs a filtering process on data stored in a RAM 308 by performing a product-sum operation on the data by using a filter coefficient, under the control of the CPU 306. Then, the processing unit 305 stores the filtered data (the filtering process result) in a memory device such as the RAM 308 or the storage unit 302.
The CPU 306 executes various processes by using computer programs and data stored in the RAM 308 or a ROM 307. The CPU 306 thus controls the operation of the whole data processing apparatus, and executes or controls each processing (to be described later) to be performed by the data processing apparatus.
The ROM 307 stores information requiring no rewriting, such as a boot program and setting data of the data processing apparatus. The RAM 308 has an area for storing a computer program and data loaded from the ROM 307 or the storage unit 302, data received from an external apparatus by the communication unit 303, and the filtering process result output from the processing unit 305. In addition, the RAM 308 has a work area to be used by the CPU 306 or the processing unit 305 when executing various processes. Thus, the RAM 308 can appropriately provide the various areas. Note that a partial area in the RAM 308 can also be used as the storage unit 302.
When the data processing apparatus receives a computer program from an external apparatus via the communication unit 303, the data processing apparatus executes the computer program after storing the program once in the storage unit 302 and then loading the program into the RAM 308, or executes the program by directly loading it into the RAM 308 from the communication unit 303.
An image processing unit 309 reads out an image stored in the storage unit 302 and performs image processing such as range adjustment on the pixel value of each pixel of the image, and outputs the processed image (the image processing result) to the storage unit 302 or the RAM 308, under the control of the CPU 306.
Note that the obtaining sources and output destinations of the various kinds of data explained in this embodiment are examples, and are not intended to limit the obtaining sources and output destinations to those explained in this embodiment.
A configuration example of the hierarchical neural network (CNN) to be used in the filtering process by the processing unit 305 will be explained below. In this embodiment, the dilated convolution of equation (2) is performed as a convolution using a dilated filter having a filter coefficient C′x,y(m, n), that is, a product-sum operation is performed in accordance with equation (3) below:

O_{i,j}(n) = Σ_{m=1}^{M} Σ_{x=0}^{D×(X−1)} Σ_{y=0}^{D×(Y−1)} ( I_{i+x,j+y}(m) × C′_{x,y}(m, n) )   (3)
In equation (3), variables common to equations (1) and (2) are as described earlier, so an explanation thereof will be omitted. Equation (4) below indicates the filter coefficient C′x,y(m, n):

C′_{x,y}(m, n) = C_{⌊x/D⌋,⌊y/D⌋}(m, n)   (if x and y are both multiples of D)
C′_{x,y}(m, n) = 0   (otherwise)   (4)

When the values of x and y are multiples of D, the value of the filter coefficient C′x,y(m, n) is the same as that of the filter coefficient Cx/D,y/D(m, n), and is a significant value (an effective coefficient). On the other hand, when the values of x and y are not multiples of D, the value of the filter coefficient C′x,y(m, n) is 0, which means that the corresponding calculation can be omitted. Here, ⌊x⌋ denotes the floor function, which outputs the maximum integer equal to or smaller than x.
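The dilation of equation (4) can be illustrated with a short Python sketch; the function name dilate_filter and the array layout are our own assumptions.

```python
import numpy as np

def dilate_filter(weights: np.ndarray, D: int) -> np.ndarray:
    """Compute the dilated coefficients C'x,y(m, n) of equation (4) for one filter.
    weights: (Y, X) nondilated coefficients Cx,y(m, n).
    Returns a (D*(Y-1)+1, D*(X-1)+1) filter: the original coefficients at the
    positions where x and y are multiples of D, and 0 everywhere else."""
    Y, X = weights.shape
    dilated = np.zeros((D * (Y - 1) + 1, D * (X - 1) + 1), dtype=weights.dtype)
    dilated[::D, ::D] = weights              # effective coefficients C_{x/D, y/D}
    return dilated
```

For example, dilate_filter(np.array([[1., 2.], [3., 4.]]), 2) yields a 3×3 filter with one 0 inserted between adjacent coefficients, as in the frames described below.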
The dilation rate of the dilated convolution in layer 1 is 1. As indicated in a frame 601, therefore, the filter is the same before and after dilation (extension), and the filtering process (convolution) remains unchanged.
The dilation rate of the dilated convolution in layer 2 is 2. As indicated in a frame 602, therefore, the size of the filter after dilation (extension) is 3×3: (D−1) = one 0 is inserted between filter coefficients adjacent to each other in the vertical and horizontal directions of the filter before dilation.
The dilation rate of the dilated convolution in layer 3 is 4. As indicated in a frame 603, therefore, the size of the filter after dilation (extension) is 5×5: (D−1) = three 0s are inserted between filter coefficients adjacent to each other in the vertical and horizontal directions of the filter before dilation.
Next, the generation of a feature image in each layer will be explained. A plurality of feature images in layer 2 are generated by performing a product-sum operation using a plurality of feature images in layer 1 and filter coefficients based on equation (3). Then, a plurality of feature images in layer 3 are generated by performing a product-sum operation using the plurality of feature images in layer 2 and filter coefficients based on equation (3). Subsequently, a plurality of feature images in layer 4 are generated by performing a product-sum operation using the plurality of feature images in layer 3 and filter coefficients based on equation (3).
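For illustration, the following sketch chains the layer-by-layer generation just described, reusing the hypothetical conv_layer, dilated_conv, and dilate_filter sketches from above with the dilation rates 1, 2, and 4 of this example. The feature-image sizes and the random data are illustrative only; the assert checks that convolving with the dilated filter of equation (3) matches the dilated convolution of equation (2).

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((2, 16, 16))      # layer 1: M = 2 feature images (sizes invented)
for layer, D in enumerate((1, 2, 4), start=2):   # dilation rates of layers 1, 2, 3
    weights = rng.standard_normal((2, features.shape[0], 2, 2))  # N = 2 nondilated 2x2 filters
    dilated = np.array([[dilate_filter(w, D) for w in per_n] for per_n in weights])
    # equation (3) with the dilated filter equals the dilated convolution of equation (2)
    assert np.allclose(conv_layer(features, dilated), dilated_conv(features, weights, D))
    features = np.maximum(conv_layer(features, dilated), 0)      # equation (3) followed by ReLU
    print(f"layer {layer}: D={D}, feature image size {features.shape[1:]}")
```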
A configuration example of the processing unit 305 described above will be explained below with reference to the block diagram of FIG. 4. A control unit 401 controls the units of the processing unit 305, and a holding unit 408 is a memory that holds the feature images, the filter coefficients, and structure information of the network.
A holding unit 402 is a memory for holding each feature data in a feature image read out from the holding unit 408 under the control of the control unit 401. A dilation unit 403 stores a filter transferred from the holding unit 408 into a holding unit 404 under the control of the control unit 401. Then, the dilation unit 403 generates a dilated filter (extended filter) by dilating (extending) the stored filter in accordance with “a dilation rate corresponding to the current layer”, and stores the generated dilated filter in the holding unit 404.
An arithmetic unit 406 performs an arithmetic operation (filtering process) complying with abovementioned equation (3) by using the feature images stored in the holding unit 402 and the dilated filter stored in the holding unit 404.
A processing unit 407 performs an activation/pooling process on the result of the arithmetic operation performed by the arithmetic unit 406, and outputs the result of this activation/pooling process as a feature image in the current layer.
The feature data are held in the holding unit 402 as described above, and are shifted and output in order. When the feature data are held in registers in the holding unit 402 and a product-sum operation is performed in accordance with equation (2), it is difficult to refer to the feature data while skipping them. In this embodiment, therefore, the product-sum operation of equation (3) is performed by using a dilated filter obtained by dilating the filter.
Detailed configuration examples of the holding unit 402, the holding unit 404, and the arithmetic unit 406 will be explained with reference to the block diagram of FIG. 11.
The arithmetic unit 406 sets addresses (in the storage unit 1104 and the storage unit 1105) for reading out data from the holding unit 402 and the holding unit 404, respectively. Then, a multiplier 1101 of the arithmetic unit 406 performs the multiplication of abovementioned equation (3) by using the feature data read out from the address set in the holding unit 402 and the filter coefficient read out from the address set in the holding unit 404. An adder 1102 of the arithmetic unit 406 performs the addition of equation (3) by using the multiplication result from the multiplier 1101, cumulatively adds the result of the addition to the addition result stored in a storage unit 1103, and stores the sum in the storage unit 1103.
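The accumulate behavior of the multiplier 1101, the adder 1102, and the storage unit 1103 can be modeled in a few lines of Python. This is a toy software model with invented names, assuming one feature datum and one coefficient per step; the actual embodiment is a hardware circuit.

```python
class MacUnit:
    """Toy model of the multiplier 1101, the adder 1102, and the storage unit 1103."""
    def __init__(self):
        self.acc = 0.0                       # storage unit 1103: the running sum

    def step(self, feature: float, coeff: float) -> None:
        self.acc += feature * coeff          # multiply (1101), then cumulative add (1102)

    def result(self) -> float:
        return self.acc

mac = MacUnit()
for f, c in zip([1.0, 2.0, 3.0], [0.5, 0.0, 0.25]):   # data read from the set addresses
    mac.step(f, c)
print(mac.result())                          # 1.25
```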
Next, the filtering process of the data processing apparatus according to this embodiment will be explained with reference to a flowchart.
Processes in steps S102 to S113 are performed on each layer in the hierarchical neural network.
In step S103, the control unit 401 sets the dilation rate D of the dilated convolution in accordance with the structure information stored in the holding unit 408. In this embodiment, the dilation rate D of the same layer remains the same. However, it is also possible to set different dilation rates D for different feature images even in the same layer, and dilate a filter to be applied to a feature image of interest in accordance with the dilation rate D set for the feature image of interest. It is further possible to divide feature images into a plurality of groups, set the dilation rate D for each group, and dilate a filter to be applied to a feature image of interest in accordance with the dilation rate D set for a group to which the feature image of interest belongs.
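As a sketch of the group-wise setting just described, the dilation rate could be looked up per group; the groups and rates below are invented for illustration, not taken from the embodiment.

```python
# Illustrative only: six feature images divided into three groups with
# dilation rates 1, 2, and 4.
dilation_by_group = {0: 1, 1: 2, 2: 4}
group_of_feature = [0, 0, 1, 1, 2, 2]        # group to which each feature image belongs
D_for_feature = [dilation_by_group[g] for g in group_of_feature]
print(D_for_feature)                         # [1, 1, 2, 2, 4, 4]
# the filter applied to each feature image is then dilated with its own rate D
```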
Processes in steps S104 to S112 are performed on each feature image (output feature image) in the current layer.
In step S105, the control unit 401 initializes the convolution result stored in the storage unit 1103 of the arithmetic unit 406 to 0. Processes in steps S106 to S109 are performed on each feature image (input feature image) in a preceding layer.
In step S107, the control unit 401 reads out each feature data of the input feature image from the holding unit 408, and transfers the feature data to the holding unit 402. Also, the control unit 401 reads out each filter coefficient of a filter from the holding unit 408, and transfers the filter coefficient to the dilation unit 403.
In step S108, the dilation unit 403 stores the transferred filter in the holding unit 404, generates a dilated filter by dilating the stored filter in accordance with the dilation rate set in step S103, and stores the dilated filter in the holding unit 404. Then, the arithmetic unit 406 performs convolution (a filtering process) complying with abovementioned equation (3) by using the input feature image transferred to the holding unit 402, and the dilated filter stored in the holding unit 404. In step S108, processes in steps S114 to S120 are performed. Details of step S108 will be described later.
When the process has advanced to step S110, the convolution on all the input feature images in the preceding layer is complete. In step S110, the processing unit 407 performs an activation process in accordance with equation (5) below on the result of the convolution of all the input feature images in the preceding layer:

f(x) = max(0, x)   (5)
In equation (5), f( ) is an activation function, and x is the result of convolution. In this example, the activation function is implemented by using a ReLU (Rectified Linear Unit). However, the activation function is not limited to the ReLU, and can also be implemented by using another nonlinear function or a quantization function. Then, in accordance with information of the layer, the processing unit 407 performs a pooling process based on the activation process result, and adjusts the size of an output feature image as needed.
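As a sketch, equation (5) and a size-adjusting pooling step might look as follows in Python. The 2×2 max pooling is an illustrative assumption; the embodiment only states that pooling is performed in accordance with the layer information.

```python
import numpy as np

def activate(x: np.ndarray) -> np.ndarray:
    """Equation (5) with a ReLU: f(x) = max(0, x)."""
    return np.maximum(x, 0)

def pool_2x2(x: np.ndarray) -> np.ndarray:
    """Illustrative 2x2 max pooling to adjust the output feature image size."""
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]            # drop an odd row/column if present
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))
```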
In step S111, the processing unit 407 stores the output feature image generated in the process in step S110 into the holding unit 402 so as to use this output feature image as an input feature image for obtaining an output feature image in the next layer. By performing the process as described above, each feature image (an output feature image) in the next layer can be generated.
Details of the convolutional arithmetic operation (steps S114 to S120) using feature data of a feature image and a filter coefficient of a dilated filter in abovementioned step S108 will be explained below.
In step S114, the dilation unit 403 stores a filter transferred from the holding unit 408 into the holding unit 404, and generates a dilated filter by dilating the stored filter in accordance with the dilation rate D set in step S103. More specifically, the dilation unit 403 calculates a filter coefficient C′x,y(m, n) of the dilated filter based on a filter coefficient Cx,y(m, n) of a nondilated filter.
In step S115, the dilation unit 403 stores the dilated filter generated in step S114 into the holding unit 404. Processes in steps S116 to S120 are performed on each set of feature data and a filter coefficient.
In step S117, the arithmetic unit 406 sets an address for reading out data from the holding units 402 and 404, that is, an address corresponding to x and y in equation (4), and determines the order of reading out feature data and filter coefficients.
In step S118, the multiplier 1101 of the arithmetic unit 406 reads out feature data from the address set in the holding unit 402, and reads out a filter coefficient from the address set in the holding unit 404.
A plurality of feature data are held in the plurality of storage units 1104. The holding unit 402 outputs feature data by transferring feature data held in the storage unit 1104 to the adjacent storage unit 1104.
In step S119, the multiplier 1101 of the arithmetic unit 406 performs the multiplication of equation (3) by using the feature data read out in step S118 and the filter coefficient read out in step S118. The adder 1102 of the arithmetic unit 406 performs the addition of equation (3) by using the multiplication result from the multiplier 1101, cumulatively adds the result of the addition to the result of addition stored in the storage unit 1103, and stores the result of the cumulative addition in the same storage unit 1103. The addition result stored in the storage unit 1103 when the process has advanced to step S109 is the result of convolution corresponding to one input feature image, so this convolution result is a target to be processed in step S110.
As described above, the data processing apparatus according to this embodiment can efficiently process a dilated filter while referring to feature data one by one. A frame 501 in FIG. 5 shows an example of the processing timing of this embodiment.
The time axis runs from 1 ns to 10 ns. At 1 ns, the product of the upper left feature data Ii,j(m) of the feature image and the filter coefficient C0,0(m, n) is calculated and used as the initial value of the cumulative value of the convolution. At 2 ns, the product of the feature data Ii+1,j(m) of the feature image and a filter coefficient of 0 is calculated and added to the cumulative value. At 3 ns, the product of the upper right feature data Ii+2,j(m) of the feature image and the filter coefficient C1,0(m, n) is calculated and added to the cumulative value. At 4 ns to 6 ns, the products of feature data of the feature image and filter coefficients of 0 are calculated and added to the cumulative value. At 7 ns, the product of the lower left feature data Ii,j+2(m) of the feature image and the filter coefficient C0,1(m, n) is calculated and added to the cumulative value. At 8 ns, the product of the feature data Ii+1,j+2(m) of the feature image and a filter coefficient of 0 is calculated and added to the cumulative value. At 9 ns, the product of the lower right feature data Ii+2,j+2(m) of the feature image and the filter coefficient C1,1(m, n) is calculated and added to the cumulative value. At 10 ns, the cumulative value is output as the convolution result.
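The walkthrough above can be reproduced with the earlier hypothetical dilate_filter sketch. The feature-data values are invented; only the cycle-by-cycle order matches the frame 501 example (D = 2, a 2×2 filter dilated to 3×3).

```python
import numpy as np

feature = np.arange(9, dtype=float).reshape(3, 3)             # invented I values, 3x3 window
dilated = dilate_filter(np.array([[1., 2.], [3., 4.]]), D=2)  # [[1,0,2],[0,0,0],[3,0,4]]
acc = 0.0
for cycle, (f, c) in enumerate(zip(feature.ravel(), dilated.ravel()), start=1):
    acc += f * c                              # products with coefficient 0 add nothing
    print(f"{cycle} ns: I={f}, C'={c}, cumulative value={acc}")
print(f"{cycle + 1} ns: output convolution result {acc}")
```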
Note that when the filter coefficient is 0, the product of feature data and a filter coefficient of 0 is calculated and added to the cumulative value in the above explanation. To reduce the calculation cost, however, it is also possible to omit the process of calculating the product of feature data and a filter coefficient of 0 and adding the product to the cumulative value.
The CPU 306 obtains the image processing result based on the output result from the final layer (layer 4 in the example described above).
As described above, this embodiment can perform dilated convolution while processing feature data one by one. Also, the filter transfer amount does not increase because not a dilated filter but a nondilated filter is transferred. The effect is particularly large in a CNN that hierarchically performs a large number of convolutions.
In the second embodiment, differences from the first embodiment will be explained; the rest is the same as the first embodiment unless otherwise specified. The block diagram of FIG. 7 shows a configuration example of the processing unit 305 according to this embodiment.
A holding unit 701 is a memory for holding a filter read out from a holding unit 408 under the control of a control unit 401. From the filter stored in the holding unit 701, a dilation unit 702 generates a dilated filter that is dilated in accordance with a dilation rate corresponding to the current layer, and outputs the dilated filter. An arithmetic unit 406 performs an arithmetic operation (filtering process) complying with abovementioned equation (3) by using feature data stored in a holding unit 402 and a filter coefficient output from the dilation unit 702.
The filtering process to be performed by a data processing apparatus according to this embodiment differs from that of the first embodiment in the following point. In step S107, the control unit 401 reads out each feature data of an input feature image from the holding unit 408, and transfers the data to the holding unit 402. Also, the control unit 401 reads out a filter from the holding unit 408, and transfers the filter to the holding unit 701.
In step S108 of this embodiment, processes in steps S901 to S907 shown in FIG. 9 are performed.
In step S903, the arithmetic unit 406 sets an address for reading out data from the holding units 402 and 701, that is, an address corresponding to x and y of equation (4), and determines the order of reading out feature data and filter coefficients. In this step, the arithmetic unit 406 notifies the dilation unit 702 of the address corresponding to x and y, only when both of x and y are multiples of a dilation rate D.
In step S904, a multiplier 1101 of the arithmetic unit 406 reads out feature data from the address set in the holding unit 402. Also, if the address is notified from the arithmetic unit 406, the dilation unit 702 reads out a filter coefficient from the notified address in the holding unit 701.
In step S905, the dilation unit 702 outputs the filter coefficient read out from the holding unit 701 if the address is notified from the arithmetic unit 406, or outputs “0” as a filter coefficient if not. A filter formed by arranging the filter coefficients output from the dilation unit 702 in order is “a dilated filter having a size obtained by multiplying a nondilated filter by D”. That is, in this embodiment, a nondilated filter is held in the holding unit 701, and the dilation unit 702 generates a dilated filter from the nondilated filter, and outputs a filter coefficient of the generated dilated filter.
In step S906, the multiplier 1101 of the arithmetic unit 406 performs the multiplication of equation (3) by using the feature data read out in step S904, and the filter coefficient output from the dilation unit 702 in step S905. An adder 1102 of the arithmetic unit 406 performs the addition of equation (3) by using the multiplication result from the multiplier 1101, cumulatively adds the result of the addition to the result of the addition stored in a storage unit 1103, and stores the result of the cumulative addition in the storage unit 1103.
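The on-the-fly coefficient generation of steps S903 to S905 might be sketched as a generator that never materializes the dilated filter in memory; the function name and structure are illustrative assumptions.

```python
import numpy as np

def coefficient_stream(weights: np.ndarray, D: int):
    """Sketch of steps S903 to S905: emit the coefficients of the dilated filter
    on the fly. Only the nondilated filter is held; whenever (x, y) is not a
    multiple of D, no address is notified and 0 is output instead."""
    Y, X = weights.shape
    for y in range(D * (Y - 1) + 1):
        for x in range(D * (X - 1) + 1):
            if x % D == 0 and y % D == 0:
                yield weights[y // D, x // D]   # address notified: read the held coefficient
            else:
                yield 0.0                        # no notification: output 0

print(list(coefficient_stream(np.array([[1., 2.], [3., 4.]]), 2)))
# [1.0, 0.0, 2.0, 0.0, 0.0, 0.0, 3.0, 0.0, 4.0]
```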
As described above, since the filter coefficient of a nondilated filter is transferred from the holding unit 408 to the holding unit 701, the transfer time is shorter than that when the filter coefficient of a dilated filter is transferred. Also, unlike the first embodiment, the holding unit 701 of this embodiment holds not the filter coefficient of a dilated filter but the filter coefficient of a nondilated filter, so the memory size is reduced compared to that of the first embodiment.
The data processing apparatus according to this embodiment can efficiently process a nondilated filter by referring to feature data one by one. A frame 502 in FIG. 5 shows an example of the processing timing of this embodiment.
The processing order is the same as that of the first embodiment. However, since the filter coefficients before dilation are held, a filter coefficient of 0 is output at 2 ns, 4 ns to 6 ns, and 8 ns, where x and y are not both multiples of the dilation rate D. In this embodiment, the dilated convolution can thus be performed without holding any filter coefficient of 0 in the memory.
In the first embodiment, the form in which the processing unit 407 performs the activation process has been explained, but another functional unit can also execute the activation process. For example, the CPU 306 can execute the activation process. This similarly applies to other processes, so the subject of each process is not limited to that explained above.
In the first embodiment, the form in which the activation/pooling process is executed in accordance with the network structure of a hierarchical neural network has been explained. However, one or both of the activation process and the pooling process may be omitted depending on the case.
In the first embodiment, the arithmetic unit 406 has one set of the multiplier 1101, the adder 1102, and the storage unit 1103. However, the arithmetic unit 406 may also have a plurality of sets each including the multiplier 1101, the adder 1102, and the storage unit 1103. In this case, the processing speed can be increased by operating these sets in parallel.
In the first embodiment, an example in which the size (the height and the width) of a nondilated filter is 2×2 has been explained, but the filter size is not limited to this and can be an arbitrary size. Likewise, an example in which the dilation rate D of the dilated convolution is 1, 2, or 4 has been explained, but the dilation rate is not limited to these values and can be an arbitrary value.
In the second embodiment, the form in which “0” is output as a filter coefficient upon dilation of a filter has been explained. However, if non-adjacent feature data stored in the holding unit 402 can be read out consecutively, the nondilated filter coefficients can be read out from the holding unit 701 consecutively, and “0” need not be output.
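This variation might be sketched as follows: the nondilated coefficients are read consecutively while the feature data are fetched directly at stride D, so no 0 coefficient is ever produced. The function name and data layout are illustrative assumptions.

```python
import numpy as np

def strided_mac(window: np.ndarray, weights: np.ndarray, D: int) -> float:
    """Read the nondilated coefficients consecutively and fetch the non-adjacent
    feature data at stride D, so no 0 is output.
    window: the (D*(Y-1)+1) x (D*(X-1)+1) reference range of feature data."""
    Y, X = weights.shape
    acc = 0.0
    for y in range(Y):
        for x in range(X):
            acc += window[D * y, D * x] * weights[y, x]   # stride-D feature access
    return acc

win = np.arange(9.0).reshape(3, 3)
print(strided_mac(win, np.array([[1., 2.], [3., 4.]]), 2))   # 0*1 + 2*2 + 6*3 + 8*4 = 54.0
```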
The numerical values, the arithmetic methods, the process execution timings, and the like used in each of the abovementioned embodiments are merely examples, and do not intend to limit each embodiment to these examples.
Some or all of the above-described embodiments may be combined and used. Some or all of the above-described embodiments may be selectively used.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2020-042183, filed Mar. 11, 2020, which is hereby incorporated by reference herein in its entirety.