The present disclosure relates to an information processing apparatus, a control method for the information processing apparatus, and a storage medium.
In recent years, attention has been drawn to volumetric techniques in which a plurality of shooting apparatuses are installed at different positions and perform synchronized shooting, and a virtual viewpoint image is generated by compositing a plurality of images based on this shooting. With conventional volumetric techniques, a virtual viewpoint image may include unnatural colors due to differences in the generated shape of a subject, noise in the images shot by the respective shooting apparatuses, and the like.
It is disclosed in Japanese Patent Laid-Open No. 2019-106617 that the images used in the composition for such a virtual viewpoint image are displayed in descending order of the contribution percentage.
However, according to the technique described in Japanese Patent Laid-Open No. 2019-106617, a user needs to find the image that causes noise by searching through the plurality of displayed images, and must then take a measure to cancel the noise by, for example, invalidating that image. Thus, there has been a problem in that the task is complex and wasteful because, for example, image data outside the noise region is also invalidated.
The present disclosure has been made in view of the aforementioned problem, and provides a technique to alleviate the complexity of the search for an image that causes noise, and also to generate a virtual viewpoint image by effectively using image data outside a noise region.
According to one aspect of the present disclosure, there is provided an information processing apparatus comprising: one or more memories storing instructions; and one or more processors executing the instructions to: obtain a first virtual viewpoint image, wherein the first virtual viewpoint image is generated based on a group of images obtained by shooting a subject from a plurality of directions; obtain at least one image coordinate in a specific region in the first virtual viewpoint image; determine, based on a pixel value at the at least one image coordinate, a parameter for generating a virtual viewpoint image; and generate a second virtual viewpoint image using the determined parameter.
Further features of the present disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note, the following embodiments are not intended to limit the scope of the claimed disclosure. Multiple features are described in the embodiments, but limitation is not made to a disclosure that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
The present embodiment will be described using an example in which a system that generates a virtual viewpoint image based on images obtained by shooting a subject from a plurality of directions generates a virtual viewpoint image from which noise has been removed, based on pixel values of noise in a region designated by a user.
Note that images described in the present embodiment are not limited to still images, and the description will be provided under the assumption that the images also include a video that is shot or reproduced over a continuous period of time.
Note that the plurality of shooting apparatuses 100 need not be installed around the entire periphery of the shooting area, and may be installed to face the shooting area from only some directions depending on, for example, restrictions on installation positions. Also, the number of shooting apparatuses is not limited to the examples shown in
As shown in
The volumetric data generation apparatus 112 obtains images from the shooting apparatuses 100, generates shooting apparatus information using the obtained images, and stores the shooting apparatus information into a volumetric data storage unit 30. Here, the shooting apparatus information is a set of parameters including the three-dimensional positions of the plurality of shooting apparatuses 100, the orientations of the shooting apparatuses in the panning, tilting, and rolling directions, and the field-of-view sizes (angles of view) and resolutions of the shooting apparatuses. The shooting apparatus information is calculated in advance in a procedure of known camera calibration. That is to say, a marker is shot simultaneously by the plurality of shooting apparatuses 100, the image coordinates at which the marker has been detected in the plurality of images obtained through the shooting are mutually associated, and the shooting apparatus information is calculated through geometric calculation.
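As a purely illustrative sketch, and not part of the disclosure itself, the shooting apparatus information described above could be held in a per-camera record such as the following; all names are hypothetical.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ShootingApparatusInfo:
    """Hypothetical per-camera parameter set; the field names are illustrative."""
    position: np.ndarray      # three-dimensional position of the shooting apparatus
    orientation: np.ndarray   # panning, tilting, and rolling angles
    angle_of_view: float      # field-of-view size
    resolution: tuple         # (width, height) in pixels

# A plurality of parameter sets, one per frame, can represent the positions and
# directions of the shooting apparatuses at each of a plurality of continuous time points.
shooting_apparatus_info_per_frame: dict[int, list[ShootingApparatusInfo]] = {}
```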
Note that the contents of the shooting apparatus information are not limited to the ones described above. The shooting apparatus information may include a plurality of parameter sets. For example, the shooting apparatus information may be information which includes a plurality of parameter sets that respectively correspond to a plurality of frames composing moving images of the shooting apparatuses, and which indicates the positions and directions of the shooting apparatuses at each of a plurality of continuous time points.
Also, the method of generating volumetric data in the volumetric data generation apparatus 112 is not limited in particular. For example, shape data of a subject 5 may be generated using a volume intersection method described in Japanese Patent Laid-Open No. 2019-106617, and stored into the volumetric data storage unit 30 as volumetric data. Furthermore, the volumetric data storage unit 30 may include a group of images shot by the shooting apparatuses 100.
An information processing apparatus 200 obtains virtual viewpoint image data from a virtual viewpoint image storage unit 10, and obtains image coordinate data from an image coordinate indication apparatus 20. Furthermore, the information processing apparatus 200 obtains volumetric data from the volumetric data storage unit 30. Then, the information processing apparatus 200 updates the virtual viewpoint image data, and also displays a virtual viewpoint image on a display apparatus 300. Here, the virtual viewpoint image data is generated by the information processing apparatus 200 in advance as indicated by step S02020, which will be described later with reference to
The image coordinate indication apparatus 20 is, for example, a mouse, and transmits the image coordinate data of a position that has been indicated by a pointer and clicked on a later-described operation screen. The coordinates are transmitted as, for example, integer values along mutually perpendicular x- and y-axes, with the lower-left corner of the screen acting as the origin. Alternatively, the image coordinate indication apparatus 20 may be, for example, a touch panel or the like. The display apparatus 300 is a device such as a liquid crystal display or a tablet terminal, and displays the data of the received virtual viewpoint image on a screen.
The information processing apparatus 200 includes a virtual viewpoint image obtainment unit 210, an image coordinate obtainment unit 220, a volumetric data obtainment unit 230, a parameter calculation unit 240, and a virtual viewpoint image generation unit 250.
The virtual viewpoint image obtainment unit 210 obtains, for example, virtual viewpoint image data in a 3-channel RGB format with a full high-definition (HD) resolution. A virtual viewpoint image is not limited to being in this data format, and may have a 4K resolution, be in a 4-channel RGBA format, or be a grayscale image, for example. Furthermore, a virtual viewpoint image may be in a format where it is represented by luminance and a color difference.
The image coordinate obtainment unit 220 obtains image coordinate data indicated by the image coordinate indication apparatus 20 as integer values of x and y. The volumetric data obtainment unit 230 obtains volumetric data from the volumetric data storage unit 30.
The parameter calculation unit 240 calculates virtual viewpoint image generation parameters to be used in the virtual viewpoint image generation unit 250 with use of the obtained virtual viewpoint image, image coordinates, volumetric data, and virtual viewpoint information. The details of these virtual viewpoint image generation parameters will be described later.
It is assumed that, in the initial state, a virtual viewpoint is defined by a virtual camera 6 that is virtually arranged at a position looking up at the subject 5 shown in
Note that in a case where the image coordinates have not been indicated and obtained, the virtual viewpoint image generation parameters may be calculated using a method described in Japanese Patent Laid-Open No. 2018-42237. The “virtual viewpoint image generation parameters” according to the present embodiment are equivalent to “weights” described in Japanese Patent Laid-Open No. 2018-42237.
The virtual viewpoint image generation unit 250 generates a virtual viewpoint image based on an image blending technique described in Japanese Patent Laid-Open No. 2018-42237 with use of the volumetric data, the virtual viewpoint information, and the virtual viewpoint image generation parameters calculated by the parameter calculation unit 240. That is to say, with respect to each pixel in the virtual viewpoint image, the group of images of the respective shooting apparatuses 100 are associated based on the shape of the subject, and the color thereof is calculated through blending based on the virtual viewpoint image generation parameters.
The virtual viewpoint image generated here is turned into data in the same data format as the virtual viewpoint image data in the virtual viewpoint image storage unit 10. Then, the virtual viewpoint image data in the virtual viewpoint image storage unit 10 is updated, and is also transmitted to the display apparatus 300.
Next,
The information processing apparatus 200 includes a CPU 211, a ROM 212, a RAM 213, an auxiliary storage apparatus 214, a display unit 215, an operation unit 216, a communication I/F 217, and a bus 218. The CPU 211 realizes each function of the information processing apparatus 200 shown in
Note that the information processing apparatus 200 may include one or more items of dedicated hardware different from the CPU 211, and at least a part of processing executed by the CPU 211 may be executed by the item(s) of dedicated hardware. Examples of dedicated hardware include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), and the like.
The ROM 212 stores, for example, a program that need not be changed. The RAM 213 temporarily stores a program and data supplied from the auxiliary storage apparatus 214, data supplied from the outside via the communication I/F 217, and the like. The auxiliary storage apparatus 214 is composed of, for example, a hard disk drive or the like, and stores various types of data such as image data and sound data. The display unit 215 is composed of, for example, a liquid crystal display, an LED, or the like, and displays, for example, a graphical user interface (GUI) that is intended for a user to operate the information processing apparatus 200. The operation unit 216 is composed of, for example, a keyboard, a mouse, a joystick, a touch panel, or the like, and inputs various types of instructions to the CPU 211 upon receiving a user operation. The CPU 211 operates as a display control unit that controls the display unit 215, and an operation control unit that controls the operation unit 216.
The communication I/F 217 is used in communication with an apparatus that is outside the information processing apparatus 200. For example, in a case where the information processing apparatus 200 is connected to an external apparatus by wire, a cable for communication is connected to the communication I/F 217. In a case where the information processing apparatus 200 has a function of performing wireless communication with an external apparatus, the communication I/F 217 includes an antenna. The bus 218 transmits information by connecting the discrete components of the information processing apparatus 200 to one another. Although it is assumed in the present embodiment that the display unit 215 and the operation unit 216 are present inside the information processing apparatus 200, at least one of the display unit 215 and the operation unit 216 may be present outside the information processing apparatus 200 as another apparatus.
Next, a procedure of processing executed by the information processing apparatus 200 according to the present embodiment will be described with reference to a flowchart of
In step S02020, the information processing apparatus 200 reads out predetermined virtual viewpoint information from a non-illustrated storage apparatus in a state where image coordinates have not been indicated. Then, it calculates virtual viewpoint image generation parameters based on formula (2)-1, which will be described later, and generates a virtual viewpoint image. The generated virtual viewpoint image is displayed on the display apparatus 300. In step S02030, the virtual viewpoint image obtainment unit 210 obtains virtual viewpoint image data from the virtual viewpoint image storage unit 10.
Then, using the image coordinate indication apparatus 20, a user indicates image coordinates in a specific region (noise region) while viewing the virtual viewpoint image via the display apparatus 300.
In step S02040, the image coordinate obtainment unit 220 receives the image coordinate data transmitted from the image coordinate indication apparatus 20. The received image coordinates 22 are presented to the user via the display apparatus 300. The designated image coordinates may indicate not only one point, but also a plurality of arbitrary regions that can be input by maneuvering the mouse. Alternatively, a plurality of sets of coordinates may be collectively designated as a quadrilateral region. In the present embodiment, it is assumed that a quadrilateral region of 5×5 pixels has been designated. Note that the size of the quadrilateral region is not limited to this example.
In step S02050, in response to pressing of a “calc” button 310 on the UI screen by the user, the parameter calculation unit 240 starts to calculate virtual viewpoint image generation parameters based on the image coordinate data received in step S02040.
In a system that generates a virtual viewpoint image using volumetric data, pixel values in the virtual viewpoint image are calculated by compositing a group of images shot by the plurality of shooting apparatuses 100. A virtual viewpoint image generation parameter according to the present embodiment is a coefficient by which a pixel value of each element included in the image group is multiplied. That is to say, a pixel value [R_o, G_o, B_o] at a pixel position [x, y] in the virtual viewpoint image is calculated using the following formula (1).
Here, I(x, y) and I_n(x, y) denote pixel values at the pixel position (x, y) received in step S02040 in the virtual viewpoint image and in each element included in the image group, respectively. Also, N_cam denotes the number of the shooting apparatuses 100, and a_n denotes a coefficient, namely a virtual viewpoint image generation parameter. Furthermore, an image I_n is an image calculated by projecting pixels onto shape data of the subject 5 using only the n-th image included in the image group; as indicated by 31 to 35 in
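For illustration, the per-pixel compositing just described can be sketched as follows. Because formula (1) itself is not reproduced here, the sketch assumes the commonly used normalized weighted sum of the projected images I_n with the coefficients a_n, and it further assumes that pixels not covered by a camera are marked with NaN; the function and variable names are hypothetical.

```python
import numpy as np

def blend_pixel(projected_images, weights, x, y):
    """Blend the pixel at (x, y) from the projected images I_n using the
    virtual viewpoint image generation parameters a_n as coefficients.
    Pixels not covered by a camera are assumed to be marked with NaN."""
    numerator = np.zeros(3)
    denominator = 0.0
    for I_n, a_n in zip(projected_images, weights):
        value = I_n[y, x]
        if not np.isnan(value).any():   # skip cameras that do not see this pixel
            numerator += a_n * value
            denominator += a_n
    return numerator / denominator if denominator > 0 else np.zeros(3)
```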
In the present step, a virtual viewpoint image generation parameter a_n is calculated using the following formulae (2) in accordance with a degree of similarity between I and I_n.
In a case where the image coordinates have not been indicated, the virtual viewpoint image generation parameter a_n is calculated using formula (2)-1 based on a function f(θ, dist) as described in Japanese Patent Laid-Open No. 2018-42237. Here, θ denotes an angle formed by a three-dimensional vector extending from the virtual viewpoint to the pixel (x, y) and a three-dimensional vector extending from the n-th shooting apparatus 100 to the subject 5.
Also, dist denotes the distance, in each image included in the image group, from the image coordinates at which a point on the three-dimensional model projected onto the pixel (x, y) in the virtual viewpoint image has been shot, to an edge of that image. The function f is designed so that its value increases as θ decreases, that is to say, as the angle formed by the virtual viewpoint and the shooting apparatus decreases. Furthermore, the function f is designed so that its value decreases as the distance dist decreases. Note that the function f is not limited to being designed in the foregoing manner. ΔE is a function indicating a color difference between I and I_n, and is calculated as a Euclidean distance in the RGB space with use of formula (3).
Note that ΔE is not limited to being designed in the foregoing manner; for example, a color difference calculation formula such as ΔE76 may be used instead. In a case where ΔE is smaller than a threshold th_ΔE, the target is deemed an outlier that has a color similar to the noise region, and the value of the virtual viewpoint image generation parameter a_n is reduced (multiplied by 0.1) using formula (2)-2. Otherwise, the target is deemed an inlier that has a normal color, and the value of the virtual viewpoint image generation parameter a_n is kept as is using formula (2)-3.
Regarding the image coordinates 22, as they have been indicated as the noise region to be removed, the value of the virtual viewpoint image generation parameter a_n that has been calculated in advance in step S02020 is reduced by multiplying it by 0.1 using formula (2)-2. The foregoing processing is executed with respect to every set of image coordinates 22, and the virtual viewpoint image generation parameter a_n is calculated and updated for each set.
As described above, in a case where a degree of similarity between a pixel value in a specific region (noise region) of the virtual viewpoint image and a pixel value in the group of images of the respective shooting apparatuses 100 that contributes to determination of such a pixel value is equal to or higher than a threshold (the difference therebetween is equal to or smaller than a threshold), the virtual viewpoint image generation parameter a_n is calculated using formula (2)-2 so that the extent of contribution becomes small compared to the opposite case. Alternatively, the virtual viewpoint image generation parameter a_n may be calculated so that the extent of contribution decreases as the degree of similarity increases.
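As a minimal sketch of the parameter update described by formulas (2) and (3), under the assumption that formula (2)-2 multiplies the previously calculated coefficient by 0.1 and formula (2)-3 keeps it unchanged, the following could be used; the threshold value is a placeholder.

```python
import numpy as np

def delta_e(I_xy, In_xy):
    """Color difference of formula (3): Euclidean distance in RGB space."""
    return float(np.linalg.norm(np.asarray(I_xy, float) - np.asarray(In_xy, float)))

def update_parameter(a_n, I_xy, In_xy, th_delta_e=30.0):
    """Reduce a_n when I_n has a color similar to the designated noise pixel
    (formula (2)-2); otherwise keep it as is (formula (2)-3)."""
    if delta_e(I_xy, In_xy) < th_delta_e:   # similar to the noise region -> outlier
        return 0.1 * a_n
    return a_n

# The update is applied for every designated set of image coordinates 22 and
# for every shooting apparatus n = 1 .. N_cam.
```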
Furthermore, inliers and outliers may be determined using a clustering method such as k-means clustering in the color space, in place of the method of determining inliers and outliers using color differences and a threshold. Moreover, a weight, namely a coefficient may be calculated in addition to the execution of removal of outliers using a robust estimation method such as M-estimation.
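One possible realization of the clustering alternative mentioned above is sketched below: the colors contributed at the designated pixel are split into two clusters in the RGB color space, and the cluster whose center is closer to the designated noise color is treated as the outlier set. This sketch uses scikit-learn purely for illustration and is not taken from the disclosure.

```python
import numpy as np
from sklearn.cluster import KMeans

def outlier_mask_by_kmeans(colors, noise_color):
    """colors: (Ncam, 3) array of the I_n values at the designated pixel.
    Returns a boolean mask that is True for the cameras whose colors fall in
    the cluster whose center is closer to the designated noise color."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.asarray(colors, float))
    noisy_cluster = int(np.argmin(
        np.linalg.norm(km.cluster_centers_ - np.asarray(noise_color, float), axis=1)))
    return km.labels_ == noisy_cluster
```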
In step S02060, the virtual viewpoint image generation unit 250 obtains the volumetric data and the virtual viewpoint image generation parameters a_n, and calculates pixel values in the virtual viewpoint image in accordance with formula (1). Note that in order to reduce waste in processing, for example, only the pixels at the image coordinates 22 may be calculated, and their pixel values may be updated.
In step S03010, the virtual viewpoint image generation unit 250 outputs the virtual viewpoint image to the display apparatus 300, and the display apparatus 300 updates and displays the virtual viewpoint image 101 on the UI screen. The updated and displayed UI screen is shown in
The foregoing steps may be executed repeatedly while the user checks the result. Furthermore, it is permissible to place an undo button 311 and a redo button 312 on the UI screen, and to provide a function of, for example, reading in past cache data held in the RAM 213 and restoring the virtual viewpoint image 101 to the immediately preceding state in response to pressing of these buttons.
As described above, based on a pixel value of noise in a region designated by the user, the virtual viewpoint image generation parameters are calculated so as to reduce the coefficients of elements that have contributed to the determination of such a pixel value, and the virtual viewpoint image is updated; in this way, the influence of noise can be reduced.
Therefore, the complexity of the search for an image that causes noise can be reduced, and in addition, the virtual viewpoint image can be generated by effectively using image data outside the noise region.
The first embodiment has been described using an example in which a noise region is designated, and the influence of noise is reduced by reducing the coefficients of elements that have pixel values similar to a pixel value of the noise region. The present exemplary modification will be described using an example in which an expected pixel value is further designated in addition to the noise region, and control is performed so that the result approaches the expected pixel value by increasing the coefficients of elements that have pixel values similar to this expected pixel value. A description of constituents similar to those of the first embodiment will be omitted.
Here, I(x_in1, y_in1) denotes a representative value of the pixel values of the virtual viewpoint image 101 at a plurality of sets of image coordinates (x_in1, y_in1) in the inlier region 23. If the pixel value I_n is close to this representative value, the value of the virtual viewpoint image generation parameter a_n is increased (multiplied by 1.5) using formula (4)-3; if it is far from this representative value, the value of a_n is reduced (multiplied by 0.1) using formula (4)-2. Note that the representative value may be a pixel value at the center of the inlier region 23, or an average value or a median value of the pixel values in the inlier region 23.
As described above, in a case where a degree of similarity between a pixel value in the group of images of the respective shooting apparatuses 100 that contribute to determination of a pixel value in a specific region (e.g., the outlier region 22 to be removed as the noise region) and a pixel value in a region (e.g., the inlier region 23 that has an expected pixel value) different from the specific region is equal to or larger than a threshold (the difference therebetween is equal to or smaller than a threshold), the virtual viewpoint image generation parameter a_n is calculated using formula (4)-3 so that the extent of contribution is large compared to the opposite case. Alternatively, the virtual viewpoint image generation parameter a_n may be calculated so that the extent of contribution increases as the degree of similarity increases.
Also, in a case where a degree of similarity between a pixel value in the group of images of the respective shooting apparatuses 100 that contribute to determination of a pixel value in a specific region (e.g., the outlier region 22 to be removed as the noise region) and a pixel value in a region (e.g., the inlier region 23 that has an expected pixel value) different from the specific region is lower than a threshold (the difference therebetween exceeds a threshold), the virtual viewpoint image generation parameter a_n is calculated using formula (4)-2 so that the extent of contribution is small compared to the opposite case. Alternatively, the virtual viewpoint image generation parameter a_n may be calculated so that the extent of contribution decreases as the degree of similarity decreases.
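A sketch of this parameter adjustment based on the inlier region is shown below, under the assumption that the comparison again uses the color difference of formula (3) against a threshold; the factors 1.5 and 0.1 follow the description above, while the threshold value and names are placeholders.

```python
import numpy as np

def update_parameter_with_inlier(a_n, In_xy, inlier_representative, th=30.0):
    """Increase a_n when I_n is close to the representative inlier color
    (formula (4)-3), and reduce it otherwise (formula (4)-2)."""
    diff = np.linalg.norm(np.asarray(In_xy, float) - np.asarray(inlier_representative, float))
    return 1.5 * a_n if diff < th else 0.1 * a_n

# The representative value may be, for example, the median of the pixel values
# in the inlier region 23:
# inlier_representative = np.median(inlier_pixels.reshape(-1, 3), axis=0)
```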
Here,
As described above, as the inlier region is designated in addition to the outlier region, the cameras for which the virtual viewpoint image generation parameters are to be increased or reduced can be differentiated more clearly, thereby further facilitating the reduction in the influence of noise.
Note that the volumetric data is not limited to the above-described contents, and may be composed of the following elements, for example. That is to say, the volumetric data may be a colored three-dimensional point group, which indicates the three-dimensional shape and colors of the subject 5, and shooting apparatus information. Also, the volumetric data may be configured in the form of a three-dimensional model including both a polygon mesh and a texture image, in place of the three-dimensional point group. Furthermore, in order to perform image-based rendering, the volumetric data may be composed of the group of images shot by the plurality of shooting apparatuses 100 and shape data of the subject in the form of a three-dimensional point group that represents only coordinates. For example, a virtual viewpoint image may be generated through image-based rendering based on the pixel values of the group of images of the shooting apparatuses 100 and shape data of the subject 5. In addition, the shape data may be in a depth image format in which the distances from the respective shooting apparatuses 100 to the subject 5 are stored on a per-pixel basis as described in Japanese Patent Laid-Open No. 2018-42237.
The present embodiment will be described using an example in which a user operation is further simplified by searching for a peripheral region similar to a region input by the user, and interpolating an input. A description of constituents similar to those of the first embodiment will be omitted.
Next, a procedure of processing executed by the information processing apparatus 200 according to the present embodiment will be described with reference to a flowchart of
In step S02070, the image coordinate update unit 260 calculates color differences, similarly to formula (3), with respect to the eight neighboring pixels around the image coordinates (x, y); if a color difference is smaller than a threshold, that is to say, if the pixel is similar, it is deemed part of the noise region, and its neighboring pixels are further searched repeatedly. The search is continued until no more similar pixels are found among the neighboring pixels. Alternatively, the search range may be limited to, for example, 10 px vertically and horizontally when performing the peripheral search.
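The repeated search of the eight neighboring pixels can be sketched as a simple region-growing expansion, as shown below; the color-difference threshold and the optional search-range limit are placeholders, and the names are hypothetical.

```python
from collections import deque
import numpy as np

def grow_noise_region(image, seed_xy, th=30.0, max_range=10):
    """Starting from the indicated coordinates, repeatedly add 8-connected
    neighbors whose color difference from the seed pixel is below th,
    optionally limiting the search to +/- max_range pixels around the seed."""
    h, w = image.shape[:2]
    sx, sy = seed_xy
    seed_color = image[sy, sx].astype(float)
    region, queue = {(sx, sy)}, deque([(sx, sy)])
    while queue:
        x, y = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (nx, ny) in region or not (0 <= nx < w and 0 <= ny < h):
                    continue
                if abs(nx - sx) > max_range or abs(ny - sy) > max_range:
                    continue
                if np.linalg.norm(image[ny, nx].astype(float) - seed_color) < th:
                    region.add((nx, ny))
                    queue.append((nx, ny))
    return region
```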
Here,
Also,
As described above, the processing range is expanded in the image space or in the time direction by interpolating an input through the search around the image coordinates indicated by the user; thus, similar noise regions can be collectively updated without the user indicating the coordinates of noise each time. Therefore, a user operation can be further simplified.
According to the above-described embodiments, the user needs to indicate a region in a virtual viewpoint image. In contrast, the present embodiment will be described using an example in which a noise region is indicated automatically by constructing and using a training model that has learned regions indicated by the user with respect to input virtual viewpoint images. A description of constituents similar to those of the first embodiment will be omitted.
The training model obtainment unit 270 obtains image coordinate training model data from the image coordinate training model storage unit 40. The image coordinate detection unit 280 obtains virtual viewpoint image data from the virtual viewpoint image storage unit 10, and obtains image coordinate training model data from the training model obtainment unit 270. Then, the image coordinate detection unit 280 detects image coordinates through an inference process that uses a training model, and then transmits the detected image coordinate data to the image coordinate obtainment unit 220.
Next, a procedure of processing executed by the information processing apparatus 200 according to the present embodiment will be described with reference to a flowchart of
In step S02080, the training model obtainment unit 270 obtains an image coordinate training model from the image coordinate training model storage unit 40. The image coordinate training model is a neural network which has been implemented using, for example, an open-source machine learning software library, and which has learned a region that has been indicated by the user in the above-described embodiments as supervisory data. In generating the image coordinate training model, a training model generation unit may be separately added that executes training processing using the region indicated by the user in the first embodiment and virtual viewpoint image data as inputs. In the present embodiment, the image coordinate training model is obtained as a data file from an external recording apparatus, and stored into the image coordinate training model storage unit 40.
In step S02090, the image coordinate detection unit 280 obtains the virtual viewpoint image obtained in step S02030 as an input, and outputs an inference result image using the image coordinate training model. The inference result image is, for example, a grayscale image that has the same image size as the virtual viewpoint image and 8 bits per channel, and the value of each pixel therein indicates the probability that the pixel is in a noise region as a tone from 0 to 255.
Here,
In the subsequent step S02040, the image coordinate obtainment unit 220 obtains the inference result image as image coordinate data. In step S02050, parameters are calculated based on the pixel values in the obtained inference result image. That is to say, with respect to pixels (x, y) in the inference result image that have non-zero pixel values, parameters are calculated based on the following formula (5).
Here, I_gray denotes a pixel value in the inference result image; formula (5) subtracts from 1 the normalized value obtained by dividing this pixel value by 255, thereby increasing the extent of the update of the virtual viewpoint image generation parameter a_n as the probability that the pixel belongs to the noise region increases.
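Under the reading given above, formula (5) can be sketched as scaling the previously calculated coefficient by (1 − I_gray / 255); this particular form is an assumption, since the formula itself is not reproduced here.

```python
def update_parameter_from_inference(a_n, i_gray):
    """Scale a_n by (1 - I_gray / 255): the larger the inferred probability that
    the pixel belongs to a noise region, the stronger the reduction of a_n."""
    return a_n * (1.0 - i_gray / 255.0)
```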
Note that the method of using a pixel value in the inference result image is not limited to this; for example, instead of using this method, image coordinate data may be generated by extracting coordinates in the inference result image that have pixel values larger than a predetermined value, and parameters may be calculated using a method similar to that of the first embodiment.
As described above, a noise region can be indicated automatically by constructing and using a training model that has learned regions indicated by the user with respect to input virtual viewpoint images.
Although the above embodiments have been described using an example in which a virtual viewpoint image is rendered using a blending technique, no limitation is intended by this. Alternatively, a virtual viewpoint image may be rendered through, for example, model-based rendering. For example, a virtual viewpoint image may be generated by coloring shape data that has been obtained from the volumetric data storage unit 30 as intermediate data, and rendering the colored shape data on a virtual viewpoint.
In this case, it is necessary to generate a colored three-dimensional point group and a texture image by compositing pixel values in the group of images shot by the shooting apparatuses 100 when coloring the shape data. In generating the colors of the three-dimensional point group and the texture image, the influence of noise can be reduced via a noise region indication method and a parameter calculation method similar to those of the first embodiment. Furthermore, in a case where the shape data is in a format where it cannot be used as is in the generation of the texture image, such as a depth image format, a three-dimensional model generation unit that converts the shape data into a polygon mesh may be separately provided. The three-dimensional model generation unit generates a three-dimensional model of the subject 5 based on pixel values in the group of images of the shooting apparatuses 100, the virtual viewpoint image generation parameters, and shape data of the subject 5.
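For the model-based rendering variant, the coloring of each point of the three-dimensional point group could be sketched in the same weighted form, reusing the virtual viewpoint image generation parameters of the first embodiment; the projection details are omitted and the helper project_to_camera is hypothetical.

```python
import numpy as np

def color_point(point_3d, images, weights, project_to_camera):
    """Composite the color of one point of the three-dimensional point group
    from the images of the shooting apparatuses 100, weighted by a_n."""
    numerator, denominator = np.zeros(3), 0.0
    for n, (img, a_n) in enumerate(zip(images, weights)):
        uv = project_to_camera(n, point_3d)     # hypothetical projection helper
        if uv is not None:                      # the point is visible in camera n
            numerator += a_n * img[uv[1], uv[0]]
            denominator += a_n
    return numerator / denominator if denominator > 0 else np.zeros(3)
```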
According to the present disclosure, the complexity of the search for an image that causes noise can be reduced, and in addition, a virtual viewpoint image can be generated by effectively using image data outside a noise region.
Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2023-122743, filed Jul. 27, 2023, which is hereby incorporated by reference herein in its entirety.