The present invention relates to an image processing apparatus, an image processing method, and a non-transitory computer-readable storage medium.
In ultra-low-light environments found in surveillance applications, there is a need to improve the visibility of target subjects. An extremely high gain is therefore employed when shooting with a camera. While applying a high gain can brighten the shot, doing so also increases the noise. However, in surveillance applications, the visibility of the subject is prioritized even at the expense of the quality of the image.
Noise reduction (“NR” hereinafter) has generally been used to reduce noise contained in shot video. NR may be applied in the camera, or may be applied by an image processing apparatus outside the camera obtaining an image from the camera. Noise reduction processing based on artificial intelligence techniques using deep learning (“DLNR” hereinafter) is also being used in recent years. Although DLNR has been confirmed to be more effective than conventional NR, the scale of the processing is larger, and thus processing time is an issue.
Incidentally, when image processing takes a long time, a technique is known which increases the overall throughput by taking a partial region of the overall image as a region of interest (“ROI” hereinafter) and limiting the region subject to image processing to the ROI, thereby reducing the processing time. An ROI function may be provided to a photographer.
Japanese Patent Laid-Open No. 2021-118403 discloses a technique for switching whether to perform image restoration processing according to a degree of degradation of each of blocks in an input image. According to Japanese Patent Laid-Open No. 2021-118403, since the region where NR is required as image restoration processing is determined and applied as an ROI, it is possible to apply NR only to the minimum necessary area, which reduces the processing time.
Japanese Patent Laid-Open No. 2007-110338 discloses an example in which a noise amount in the ROI and edges of regions near the ROI are calculated, weighting is applied according to the noise amount and the edges, and noise reduction processing is performed according to the weights.
However, with this past technique, noise is estimated only from the ROI. There have thus been situations where the noise estimation accuracy drops when the ROI is small, and the noise removal consequently fails to remove the noise properly.
Accordingly, the present invention provides an image processing apparatus capable of appropriately removing noise even when a region from which noise is to be removed, such as an ROI, is small.
According to one aspect of the present disclosure, there is provided an image processing apparatus comprising: one or more memories storing instructions; and one or more processors executing the instructions to: execute first setting processing for setting a reduced region in an image; execute second setting processing for setting an estimation region in the image based on the reduced region; execute estimation processing for estimating a noise characteristic of the estimation region; and execute noise reduction processing that reduces noise in the reduced region of the image according to a parameter based on the noise characteristic, wherein in the second setting processing, when a surface area of the reduced region is smaller than a threshold surface area, the estimation region is set to be at least as large as the threshold surface area.
According to another aspect of the present disclosure, there is provided an image processing apparatus comprising: one or more memories storing instructions; and one or more processors executing the instructions to: execute first setting processing for setting a reduced region in an image; execute second setting processing for setting an estimation region in the image based on the reduced region; execute estimation processing for estimating a noise characteristic of the estimation region; and execute noise reduction processing that reduces noise in the reduced region according to a parameter based on the noise characteristic, wherein in the second setting processing, in a case where an overlaid region in which another image is overlaid is present in the image, the estimation region is set to a region different from the overlaid region.
According to another aspect of the present disclosure, there is provided an image processing method comprising: setting a reduced region in an image; setting an estimation region in the image based on the reduced region; estimating a noise characteristic of the estimation region; and executing noise reduction processing that reduces noise in the reduced region of the image according to a parameter based on the noise characteristic, wherein in the setting of the estimation region, when a surface area of the reduced region is smaller than a threshold surface area, the estimation region is set to be at least as large as the threshold surface area.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program that, when loaded and executed by a computer, causes the computer to: set a reduced region in an image; set an estimation region in the image based on the reduced region; estimate a noise characteristic of the estimation region; and execute noise reduction processing that reduces noise in the reduced region of the image according to a parameter based on the noise characteristic, wherein in the setting of the estimation region, when a surface area of the reduced region is smaller than a threshold surface area, the estimation region is set to be at least as large as the threshold surface area.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note, the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but limitation is not made to an invention that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
Embodiments of the present invention will be described hereinafter. An image processing apparatus according to the present embodiment estimates noise from an input image obtained from a connected camera, infers a desired output image using a neural network (“NN” hereinafter), which is a deep learning model, for an image processing target region set as a region of interest (“ROI” hereinafter), and outputs the inferred image. Note that in the following descriptions, “image” is a concept including a video, a still image, a moving image, an image from a single frame of a moving image, and the like, and may also indicate data of an image. The ROI is an example of a “reduced region”. The image processing apparatus trains the NN, such as by bringing a feature distribution of a plurality of training images prepared in advance closer to the feature distribution of a corresponding target image, and optimizes the parameters, such as weights and biases, of the trained neural network. This enables the image processing apparatus to make accurate inferences even for input images on which the NN has not been trained. By holding the parameters of trained neural networks obtained by training the NN multiple times in accordance with the characteristics of the camera, the image processing apparatus can perform inference on an input image and generate an inferred image in which noise is reduced.
The camera 200 generates an image by capturing an image of a light beam incident from a subject field. A lens (not shown) is attached to or built into the camera 200, and includes a zoom lens group, a focus lens group, an iris mechanism, and the like. The camera 200 can change an accumulation time for exposure, and an automatic exposure function is used to apply a gain to shot images when shooting in a dark location. The camera 200 outputs the shot image.
As illustrated in the drawings, the image processing apparatus 100 includes an image input unit 110, a CPU 130, a memory 140, an operation input unit 150, an image processing unit 160, an image output unit 170, and a storage unit 180, and is connected to a camera 200, a controller 300, and a monitor 400.
The image input unit 110 obtains an image output by the camera 200 and saves the image in the memory 140.
The CPU 130 is a central processing unit. Note that in addition to the CPU 130, the image processing apparatus 100 may include a Micro Processing Unit (MPU), a Quantum Processing Unit (QPU), a Graphics Processing Unit (GPU), or the like. The CPU 130 executes the processing of the image processing apparatus 100 described later. For example, the CPU 130 implements various functions and executes various processing by executing programs stored in the storage unit 180.
The memory 140 is a random access memory (RAM), for example. The memory 140 temporarily holds image data, programs, and data necessary for executing the programs.
The operation input unit 150 obtains operation signals from an external controller 300, input by a user or the like. The controller 300 includes a keyboard, a mouse, a touch panel, a switch, and the like. The operation signals obtained by the operation input unit 150 are processed by the CPU 130, and setting operations necessary for various types of image processing executed by the image processing unit 160 are performed in response thereto.
The image processing unit 160 reads out and writes images from and to the memory 140, and executes noise estimation processing, ROI processing, NR processing, and UI image generation processing for display in a user interface (“UI” hereinafter) (described later). The image processing unit 160 stores the processed images or the like in the memory 140. The image processing unit 160 may include a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a GPU, or the like. The image processing unit 160 may be implemented as a program executed by the CPU 130.
The image output unit 170 outputs the image processed by the image processing unit 160 and stored in the memory 140 to an external monitor 400. The image output unit 170 outputs an image from a High Definition Multimedia Interface (HDMI; registered trademark) terminal, a Serial Digital Interface (SDI) terminal, or the like of the image processing apparatus 100.
The storage unit 180 includes a hard disk drive (HDD), a solid state drive (SSD), or the like. The storage unit 180 stores programs, data such as parameters necessary for executing the programs, data such as images, and the like.
The monitor 400 receives and displays images output from the image output unit 170 of the image processing apparatus 100. A photographer and a viewer can confirm images shot by the camera 200, menu screens of the image processing apparatus 100, images that have undergone image processing, and the like on the monitor 400.
Image processing performed by the CPU 130 in the image processing apparatus 100 will be described next with reference to the flowchart.
In step S100 of
In step S101 of
In step S102, the CPU 130 executes ROI initialization processing. The CPU 130 sets all pixels of the input image as pixels subject to processing, as initial values for the ROI. For example, if the size of the image input to the image input unit 110 is 1920×1080, the CPU 130 sets starting coordinates (0,0) and ending coordinates (1919,1079) as the initial values for the ROI.
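As a minimal illustration of this initialization, the following sketch represents the ROI as inclusive starting and ending coordinates, matching the (0,0)-(1919,1079) example above; the dictionary representation and function name are assumptions for illustration.

```python
# Minimal sketch of the ROI initialization in step S102 (illustrative only).
# The ROI is held as inclusive starting/ending coordinates, so a 1920x1080
# image yields (0,0)-(1919,1079), matching the example in the text.

def init_roi(width: int, height: int) -> dict:
    """Initialize the ROI to cover every pixel of the input image."""
    return {"x1": 0, "y1": 0, "x2": width - 1, "y2": height - 1}

roi = init_roi(1920, 1080)  # {'x1': 0, 'y1': 0, 'x2': 1919, 'y2': 1079}
```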
In step S103, the CPU 130 executes NR initialization processing. The CPU 130 loads trained NN parameters, which have been trained in advance, from the storage unit 180 for the NR processing executed by the image processing unit 160. The storage unit 180 holds the trained NN parameters optimized for each model of the camera 200 and for each image quality setting. The CPU 130 sets the NN parameters loaded by the image processing unit 160 in accordance with the various models and image qualities of the camera 200 connected to the image processing apparatus 100.
An NN processed by the image processing unit 160, and NN parameters to be loaded, will be described here.
Although the following will describe an example in which the NN is a Convolutional Neural Network (CNN), the present embodiment is not limited thereto. For example, a Generative Adversarial Network (GAN) or the like may be applied as the NN, or the NN may have skip connections or the like. The NN may also be a recurrent type of network such as a Recurrent Neural Network (RNN) or the like.
In the drawings, the CNN receives an input image 501 and generates feature maps, including a feature map 505, through successive layers.
In the CNN, a feature map of the input image is generated by executing convolution operations on the input image using a given filter. Note that the CNN filter may be of any size. In the next layer, the CNN generates a different feature map by executing convolution operations on the feature map of the previous layer using a different filter. In addition, in each layer, the CNN multiplies a given input signal by the filter weights and adds the biases. Then, the CNN applies an activation function to the calculated result and outputs an output signal at each neuron. The weights and biases in each layer are called the “NN parameters”, and processing for updating the NN parameters is performed in training. A sigmoid function, a ReLU function, and the like are known as examples of activation functions. The CNN of the present embodiment uses the Leaky ReLU function represented by the following Formula (1), but is not limited thereto. Note that in Formula (1), “max” represents a function that outputs the maximum value of the arguments.
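A standard formulation of the Leaky ReLU consistent with the description of Formula (1), where the slope coefficient a (for example, a = 0.01) is an assumption, is:

```latex
% Leaky ReLU; the slope coefficient a (e.g., a = 0.01) is an assumption.
f(x) = \max(x,\ a \cdot x), \qquad 0 < a < 1
```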
In pre-training for obtaining the NN parameters, the image processing unit 160 of the present embodiment uses images having the noise dispersion characteristics of the camera 200 as the training images, and uses images that do not have the noise corresponding to the training images as target images. The noise dispersion characteristics are examples of noise characteristics. The image processing unit 160 implements NR by executing training in which the training images are paired with the target images to set the NN parameters.
The feature map 505 focuses on the noise in the input image, and the image processing unit 160 can apply other parameters to the feature map 505 to learn a region of interest in which the noise component is emphasized. The image processing unit 160 calculates an average in a channel direction for input images 601, obtained by dividing the input image 501 on a color-by-color basis as channels, for example, and generates an intermediate layer 602. The image processing unit 160 performs multiple convolutions on the intermediate layer 602 using Formula (1) to obtain intermediate layers 603. The image processing unit 160 performs convolution on the intermediate layers 603 to bring the number of output channels to 1, and obtains an attention layer 604. The attention layer 604 is an intermediate layer in which a feature appears in a noise region excluding a high-frequency component of a subject. The image processing unit 160 obtains an attention layer 606 in which the noise is enhanced by multiplying the attention layer 604 by a noise strength parameter 605 specifying the strength of the noise reduction processing. The image processing unit 160 convolves the attention layer 606 with the input image 501 described above. As a result, the image processing unit 160 can generate an NN capable of NR processing by adjusting, or in other words, emphasizing, the attention level of the noise region included in the input image, among the input image 501 and the input images 601.
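As one possible concrete rendering of the attention branch described above, the following PyTorch sketch follows the described flow (channel-direction average, convolutions, a one-channel attention layer, scaling by the noise strength parameter). The channel counts, kernel sizes, layer depth, and the final element-wise weighting (described above as convolving the attention layer with the input image) are assumptions.

```python
import torch
import torch.nn as nn

class NoiseAttention(nn.Module):
    """Sketch of the attention branch (601-606). Channel counts, kernel
    sizes, and depth are assumptions; only the overall flow follows the
    description: channel-direction average -> convolutions -> one-channel
    attention map -> scaling by a noise strength parameter."""

    def __init__(self, mid_channels: int = 16):
        super().__init__()
        self.convs = nn.Sequential(                   # intermediate layers 603
            nn.Conv2d(1, mid_channels, 3, padding=1),
            nn.LeakyReLU(0.01),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=1),
            nn.LeakyReLU(0.01),
        )
        self.to_attention = nn.Conv2d(mid_channels, 1, 3, padding=1)

    def forward(self, x: torch.Tensor, noise_strength: float) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)  # intermediate layer 602: channel average
        att = self.to_attention(self.convs(avg))      # attention layer 604
        att = att * noise_strength                    # attention layer 606
        return x * att                                # weight the input by the map
```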
The descriptions will now return to the flowchart.
In step S110, the CPU 130 obtains the input image from the camera 200 through the image input unit 110 and stores the image in the memory 140. The CPU 130 then moves the sequence to the image processing settings subroutine of step S120.
In step S121, the CPU 130 obtains information regarding the model of the camera 200, selected by a user or the like using the controller 300, through the operation input unit 150. Note that the CPU 130 may obtain the information on the camera 200 selected by the user from a settings menu or the like of the image processing apparatus 100, or may obtain information on the camera 200 automatically detected by the image input unit 110 from information embedded in the signal of the input image.
In step S122, the CPU 130 obtains information regarding gamma applied in the camera 200, selected by a user or the like using the controller 300, through the operation input unit 150. Note that the CPU 130 may obtain the information regarding gamma selected from a settings menu or the like of the image processing apparatus 100, or may obtain information pertaining to the gamma automatically detected by the image input unit 110 from information embedded in the signal of the input image. The information regarding the model of the camera 200 and the gamma information of the camera 200 are examples of camera information.
In step S123, the CPU 130 determines whether the information on the camera 200 has been changed in step S121, or whether the information on the gamma has been changed in step S122. The CPU 130 moves the sequence to step S124 if the information has been changed, and to step S125 if not.
In step S124, the CPU 130 loads and sets the trained NN parameters, based on the model of the camera 200 and the gamma selected in steps S121 and S122, in the image processing unit 160. Suitable NN parameters trained in accordance with the connected camera 200 and the gamma settings thereof are applied to the NN loaded in the image processing unit 160 as a result. Once step S124 has been performed, the image processing unit 160 can perform NR processing through NN inference on the input image loaded from the memory 140.
In step S125 of
In step S126, the CPU 130 calculates a surface area α of the set ROI through Formula (2).
In step S127, the CPU 130 loads a threshold surface area β associated with the model of the camera 200 and the gamma selected in steps S121 and S122 from the storage unit 180. The threshold surface area β is the minimum area for ensuring the accuracy of the noise estimation processing in step S140 (described later). For example, the CPU 130 may select and load the threshold surface area β from a table, stored in the storage unit 180, in which the model and gamma of the camera 200 are associated with the threshold surface area β. After performing step S127, the CPU 130 ends the image processing settings subroutine in step S120, and moves the sequence to step S130.
In step S130, the CPU 130 executes a subroutine for setting the noise estimation region, described below.
In step S131, the CPU 130 compares the surface area α calculated in step S126, i.e., through Formula (2), with the threshold surface area β obtained in step S127. The CPU 130 moves the sequence to step S132 if the surface area α is lower than the threshold surface area β.
In step S132, the CPU 130 calculates a ratio γ. Here, if the surface area α of the set ROI is lower than the threshold surface area β and the ROI is taken as a noise estimation region, which is a region for noise estimation processing, the accuracy of the noise estimation processing (described later) may drop. Accordingly, in the present embodiment, the CPU 130 calculates the ratio γ as indicated by Formula (3) for the surface area α of the ROI, so as to secure a noise estimation region that satisfies the threshold surface area β. The CPU 130 calculates the square root of the value obtained by dividing the threshold surface area β by the surface area α as the ratio γ.
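Written out from that description, Formula (3) is:

```latex
% Formula (3), as described in the text:
\gamma = \sqrt{\beta / \alpha}
```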
In step S133, the CPU 130 determines whether a width W1 or a height H1 of the provisional noise estimation region, calculated from the ROI using the ratio γ, exceeds a width W0 or a height H0 of the input image, through Formulas (4) and (5).
In step S134, the CPU 130 calculates a new ratio γx and a new ratio γy. Here, there may be situations where the logical sum of Formulas (4) and (5) is true, i.e., the width W1 or the height H1 of the noise estimation region calculated by the CPU 130 so as to satisfy the threshold surface area β exceeds the width W0 or the height H0 of the input image. In this case, the CPU 130 calculates a new ratio γx and a new ratio γy for taking the width W0 or the height H0 of the input image as an upper limit.
The CPU 130 calculates the ratio γx and the ratio γy through any of the following Formulas (6) to (9), associated with the following conditions. If Formula (4) is true in step S133, the CPU 130 calculates the ratio γx and the ratio γy through Formulas (6) and (7). If Formula (5) is true in step S133, the CPU 130 calculates the ratio γx and the ratio γy through Formulas (8) and (9).
If Formula (4) is true (W1>W0):
If Formula (5) is true (H1>H0):
In step S135, the CPU 130 calculates a width W2 and a height H2 of a noise estimation range, serving as a range of the noise estimation region, through the following Formulas (10) and (11), based on the ratio γx and the ratio γy newly calculated in step S134. The area of the region indicated by the width W2 and the height H2 obtained through Formulas (10) and (11) is at least the threshold surface area β.
In step S136, the CPU 130 converts the width W2 and the height H2 into starting coordinates (X3, Y3) and ending coordinates (X4, Y4) of the noise estimation region, and performs out-of-region determination. In other words, the CPU 130 determines whether at least part of the noise estimation region is outside the input image. Note that if step S135 is not performed, i.e., when a determination of “No” is made in step S133, the CPU 130 may assume that W2=W1 and H2=H1. The CPU 130 determines whether a part is outside the region of the image based on Formulas (12) to (15) indicated below.
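Recovered from the parenthetical restatements in the conditions below, the out-of-region conditions of Formulas (12) to (15) are:

```latex
% Out-of-region conditions, Formulas (12)-(15):
X_3 < 0 \quad (12), \qquad
Y_3 < 0 \quad (13), \qquad
X_4 > W_0 - 1 \quad (14), \qquad
Y_4 > H_0 - 1 \quad (15)
```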
If the logical sum of Formulas (12) to (15) is true, the CPU 130 moves the sequence to step S137, whereas if the logical sum is false, the subroutine of step S130 ends.
In step S137, based on the result of the determination in step S136, the CPU 130 shifts the starting coordinates (X3, Y3) and the ending coordinates (X4, Y4) of the noise estimation region such that the region remains within the region of the input image. The CPU 130 calculates the starting coordinates (X5, Y5) and the ending coordinates (X6, Y6) of the shifted noise estimation region according to the following conditions and Formulas (16) to (23).
X5 and X6 setting conditions:
When W2=W0, X5=0, X6=W0−1 . . . Formula (16)
When Formula (12) is true (X3<0), X5=0, X6=W2−1 . . . Formula (17)
When Formula (14) is true (X4>W0−1), X5=W0−W2, X6=W0−1 . . .Formula (18)
Under other conditions, i.e., when Formula (12) is false (X3≥0) and Formula (14) is false (X4≤W0−1), X5=X3, X6=X4 . . . Formula (19)
Y5 and Y6 setting conditions:
When H2=H0, Y5=0, Y6=H0−1 . . . Formula (20)
When Formula (13) is true (Y3<0), Y5=0, Y6=H2−1 . . . Formula (21)
When Formula (15) is true (Y4>H0−1), Y5=H0−H2, Y6=H0−1 . . . Formula (22)
Under other conditions, i.e., when Formula (13) is false (Y3≥0) and Formula (15) is false (Y4≤H0−1), Y5=Y3, Y6=Y4 . . . Formula (23)
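Gathering steps S131 through S137 into one place, the following sketch illustrates the flow. The ROI coordinate naming (X1, Y1)-(X2, Y2), the exact forms of Formulas (2) and (6) to (11), and the centering of the provisional region on the ROI are assumptions chosen to satisfy the stated constraints: the estimation region covers at least the threshold surface area β where possible, the input image dimensions act as upper limits, and the region is shifted to remain inside the image.

```python
import math

def set_estimation_region(x1, y1, x2, y2, w0, h0, beta):
    """Sketch of steps S131-S137. The ROI is (x1,y1)-(x2,y2) inclusive and
    the input image is w0 x h0. Formulas (2) and (6)-(11) do not survive in
    the text, so the expressions marked "assumed" are illustrative forms
    that satisfy the stated constraints."""
    w, h = x2 - x1 + 1, y2 - y1 + 1
    alpha = w * h                                # assumed form of Formula (2)
    if alpha >= beta:                            # step S131 "No" -> step S138:
        return x1, y1, x2, y2                    # the ROI itself is the estimation region

    gamma = math.sqrt(beta / alpha)              # step S132, Formula (3)
    w1, h1 = w * gamma, h * gamma                # provisional region, keeps aspect ratio

    gx, gy = gamma, gamma                        # steps S133/S134: clamp to image size
    if w1 > w0:                                  # Formula (4); assumed forms of (6)/(7)
        gx = w0 / w
        gy = (beta / alpha) / gx                 # keep the area at least beta
    elif h1 > h0:                                # Formula (5); assumed forms of (8)/(9)
        gy = h0 / h
        gx = (beta / alpha) / gy
    w2 = min(w0, math.ceil(w * gx))              # step S135, assumed Formulas (10)/(11)
    h2 = min(h0, math.ceil(h * gy))

    # Step S136: center the region on the ROI (assumption) and check whether
    # it leaves the input image.
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    x3, y3 = cx - w2 // 2, cy - h2 // 2
    x4, y4 = x3 + w2 - 1, y3 + h2 - 1

    # Step S137, Formulas (16)-(23): shift the region so it stays inside.
    if x3 < 0:                                   # Formula (12) -> Formula (17)
        x3, x4 = 0, w2 - 1
    elif x4 > w0 - 1:                            # Formula (14) -> Formula (18)
        x3, x4 = w0 - w2, w0 - 1
    if y3 < 0:                                   # Formula (13) -> Formula (21)
        y3, y4 = 0, h2 - 1
    elif y4 > h0 - 1:                            # Formula (15) -> Formula (22)
        y3, y4 = h0 - h2, h0 - 1
    return x3, y3, x4, y4
```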
The CPU 130 moves the sequence to step S138 when a determination of “No” is made in step S131, i.e., if the surface area α of the ROI is at least the threshold surface area β. In step S138, because the surface area α of the ROI is at least the threshold surface area β, the CPU 130 sets the coordinates of the noise estimation region according to the following Formulas (24) to (27).
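Since the embodiment states later that the ROI itself is set as the estimation region when its surface area is at least β, and assuming the ROI coordinates are named (X1, Y1) and (X2, Y2), Formulas (24) to (27) amount to the following reconstruction:

```latex
% Assumed reconstruction of Formulas (24)-(27): the estimation region
% coincides with the ROI.
X_5 = X_1 \ (24), \qquad Y_5 = Y_1 \ (25), \qquad
X_6 = X_2 \ (26), \qquad Y_6 = Y_2 \ (27)
```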
After performing step S137 or step S138 and determining the coordinates of the noise estimation region, the CPU 130 ends the subroutine of step S130 and moves the sequence to step S140.
In step S140, the image processing unit 160 executes the noise estimation processing for the noise estimation region set in step S130, as described in the following steps.
In step S141, the image processing unit 160 divides the noise estimation region of the input image into blocks.
In step S142, the image processing unit 160 calculates a luminance and noise dispersion for each block.
In step S143, the image processing unit 160 extracts a region of a flat part in the noise estimation region. Specifically, the image processing unit 160 determines whether the noise dispersion in each block is no greater than a predetermined threshold, and extracts the region of the flat part. For example, when the image processing unit 160 determines that the noise dispersion of a block is no greater than the threshold, that block is determined to be a flat part, and 1 is set for that block.
In step S144, the image processing unit 160 collects statistical values for luminance-to-noise dispersion only for the blocks determined to be flat parts, and plots the luminance-to-noise dispersion characteristics.
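A minimal sketch of steps S141 through S144 follows, assuming the estimation region is available as a NumPy luminance array; the block size, the flatness threshold, and the function name are assumptions.

```python
import numpy as np

def collect_flat_statistics(luma: np.ndarray, block: int = 16,
                            flat_threshold: float = 30.0):
    """Sketch of steps S141-S144: divide the estimation region into blocks,
    compute per-block mean luminance and noise dispersion (variance), keep
    only flat blocks, and return (luminance, dispersion) samples."""
    samples = []
    h, w = luma.shape
    for by in range(0, h - block + 1, block):        # step S141: block division
        for bx in range(0, w - block + 1, block):
            tile = luma[by:by + block, bx:bx + block]
            mean, var = tile.mean(), tile.var()      # step S142: luminance, dispersion
            if var <= flat_threshold:                # step S143: flat-part extraction
                samples.append((mean, var))          # step S144: collect statistics
    return samples
```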
In step S145, the image processing unit 160 loads noise dispersion characteristic data corresponding to the gamma characteristics currently set from the storage unit 180. The noise dispersion characteristic data is stored in the storage unit 180 after measuring the luminance-to-noise dispersion characteristics for each of gain values in advance according to the connected camera 200 and the set gamma. For example, table data holding the luminance-to-noise dispersion characteristics for each gain value is stored in the storage unit 180.
In step S146, the image processing unit 160 selects the noise dispersion characteristics for one of the gains held in the loaded noise dispersion characteristics table.
In step S147, the image processing unit 160 measures a distance between the dispersion value at each luminance plotted in step S144 and the noise dispersion value indicated by the noise dispersion characteristic data based on the gain selected in step S146, and determines whether the characteristics match. The image processing unit 160 may determine a match when the distance is no greater than a predetermined threshold. If the noise dispersion characteristics based on the gain do not match in step S147, i.e., a determination of “no” is made, the image processing unit 160 refers to the dispersion value of the noise dispersion characteristic data for the next gain, and again determines whether the noise dispersion characteristics match. The image processing unit 160 moves the sequence to step S148 if the noise dispersion values are determined to match, i.e., if a determination of “yes” is made. The gain selected in step S146 and determined in step S147 to match the noise dispersion characteristics can be estimated to be the gain applied to the image input to the image processing apparatus 100.
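The matching of steps S145 through S147 could look like the following sketch; the table layout (a mapping from each gain to luminance-to-dispersion pairs), the distance measure (mean absolute difference at the nearest tabulated luminance), and the threshold are assumptions.

```python
def estimate_gain(samples, characteristics, match_threshold: float = 5.0):
    """Sketch of steps S145-S147. `samples` are (luminance, dispersion)
    pairs from the flat blocks; `characteristics` maps each gain value to
    a dict of {luminance: dispersion} measured in advance."""
    for gain, table in characteristics.items():      # step S146: try each gain
        lumas = sorted(table)
        dist = 0.0
        for luma, var in samples:                    # step S147: distance between
            nearest = min(lumas, key=lambda l: abs(l - luma))  # plot and table
            dist += abs(var - table[nearest])
        if samples and dist / len(samples) <= match_threshold:
            return gain                              # characteristics match
    return None                                      # no characteristic matched
```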
Various scenes are captured by the camera 200 and input to the image processing apparatus 100. In step S130, the noise estimation region is set so as to be at least the threshold surface area β, which makes it easier to secure flat parts sufficient for the noise estimation even in such varied scenes.
Returning to the description of the flowchart, in step S148, the image processing unit 160 selects a parameter for the NR processing, such as the noise strength parameter, based on the noise dispersion characteristics corresponding to the gain estimated in steps S146 and S147.
In step S151 of
In step S152, the image processing unit 160 loads the input image, obtained in step S110, from the memory 140.
In step S153, the image processing unit 160 reads out information such as the coordinates of the set ROI.
In step S154, the image processing unit 160 executes the NR inference processing by the NN, limited to the region indicated by the read-out ROI information.
In step S155, the image processing unit 160 generates a result image inferred through the NR inference processing.
In step S156, the image processing unit 160 composites the image obtained as a result of the NR inference in step S155 with the input image used for the inference.
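The compositing of step S156 amounts to writing the ROI-limited inference result back over the corresponding region of the input frame; a minimal sketch, with the array layout and inclusive coordinates assumed:

```python
import numpy as np

def composite_roi(input_image: np.ndarray, nr_result: np.ndarray,
                  x1: int, y1: int, x2: int, y2: int) -> np.ndarray:
    """Sketch of step S156: paste the NR inference result for the ROI
    (inclusive coordinates) back over the corresponding region of the
    input image, leaving the rest of the frame untouched."""
    out = input_image.copy()
    out[y1:y2 + 1, x1:x2 + 1] = nr_result            # overwrite only the ROI
    return out
```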
In step S160, the CPU 130 outputs the composited image stored in the memory 140 to the monitor 400 through the image output unit 170.
The present embodiment describes an example in which noise dispersion characteristics corresponding to a gain set in the camera 200 are estimated by evaluating an NR application range, which is limited by ROI settings made by a user or the like, against the threshold surface area β, and setting a noise estimation region that is at least the threshold surface area β. The present embodiment makes it possible to appropriately select the NR strength parameter for the NN based on the estimated noise dispersion characteristics according to the gain of the camera 200, which makes it possible to obtain an appropriate NR processing result even for input images containing a large amount of noise, such as images from a low-light environment.
By appropriately setting the noise estimation region in this manner, the present embodiment makes it possible to suppress situations where the result of estimating the noise amount is lower than the desired noise amount, improve the NR effect, and reduce the noise remaining in the input image. Additionally, by appropriately setting the noise estimation region, the present embodiment makes it possible to suppress situations where the result of estimating the noise amount is higher than the desired noise amount, suppress a drop in resolution and loss of textures due to the NR effect being too strong, and improve the image quality.
In the present embodiment, the noise estimation region for estimating the noise dispersion characteristics is set to include at least a part of the ROI for reducing the noise, and thus appropriate noise dispersion characteristics, corresponding to the noise of the ROI, can be estimated.
In the present embodiment, when the ROI is at least the threshold surface area β, the ROI is set as the estimation region, and thus noise dispersion characteristics corresponding to the noise of the ROI can be estimated.
In the present embodiment, the noise estimation region is set using the threshold surface area β corresponding to information on the model and gamma of the camera 200, and thus the noise estimation region can be set according to the state of the camera 200.
In the present embodiment, the noise estimation region is set based on the width and height of the provisional noise estimation region, and thus the noise estimation region can be set while maintaining the aspect ratio of the ROI.
In the present embodiment, when the noise estimation region is outside the input image, the noise estimation region is shifted, and thus the noise estimation region can be set within the input image to estimate appropriate noise dispersion characteristics.
Although the foregoing has described a preferred embodiment of the present invention, the present invention is not intended to be limited to the specific embodiment, and all variations that do not depart from the essential spirit of the invention are intended to be included in the scope of the present invention. Parts of the above-described embodiment may be combined as appropriate.
The foregoing embodiment described the noise estimation region that satisfies the threshold surface area β as being calculated through the procedure described above, but the method for setting the noise estimation region is not limited thereto.
The present invention also includes a case where a software program that realizes the functions of the foregoing embodiment is supplied to a system or apparatus having a computer capable of executing the program, directly from a storage medium or using wired/wireless communication, and the program is executed.
Accordingly, the program code itself, supplied to and installed in a computer so as to realize the functional processing of the present invention through a computer, also realizes the present invention. In other words, the computer program itself, for realizing the functional processing of the present invention, is also included within the scope of the present invention.
In this case, the program may be in any form, and object code, a program executed through an interpreter, script data supplied to an OS, or the like may be used, as long as it has the functionality of the program.
Examples of the recording medium that can be used to supply the program include a hard disk, a magnetic recording medium such as magnetic tape, an optical/magneto-optical storage medium, and a non-volatile semiconductor memory.
Additionally, it is conceivable, as the method for supplying the program, to store a computer program embodying the present invention in a server on a computer network, and for a client computer having a connection to download and run the computer program.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2023-158508, filed Sep. 22, 2023, which is hereby incorporated by reference herein in its entirety.