The present disclosure relates to an information processing apparatus, an information processing method, and a storage medium.
There is a system that distributes streaming content constituted by speech data, video data, and the like in real time so as to allow the user to view such content via a terminal apparatus of his/her own. In this case, terminal apparatuses have various functions and play back content in various environments. For this reason, there is a demand for a technology that adapts content playback to such environments.
According to one embodiment of the present disclosure, an information processing apparatus comprises: a generating unit configured to generate a playlist including a network address that is referred to for acquisition of an image, region information defining a spatial partial region in the image, and annotation information that is information to be displayed in association with the partial region; and a sending unit configured to send the playlist generated by the generating unit.
According to another embodiment of the present disclosure, an information processing apparatus comprises: a receiving unit configured to receive a playlist including a network address that is referred to for acquisition of an image, region information defining a spatial partial region in the image, and annotation information that is information to be displayed in association with the partial region; an analyzing unit configured to analyze the received playlist; an acquiring unit configured to acquire the image corresponding to the network address based on the analysis result; and a display unit configured to display the partial region and the annotation information while superimposing the partial region and the annotation information on the image.
According to still another embodiment of the present disclosure, an information processing method comprises: generating a playlist including a network address that is referred to for acquisition of an image, region information defining a spatial partial region in the image, and annotation information that is information to be displayed in association with the partial region; and sending the generated playlist.
According to yet another embodiment of the present disclosure, an information processing method comprises: receiving a playlist including a network address that is referred to for acquisition of an image, region information defining a spatial partial region in the image, and annotation information that is information to be displayed in association with the partial region; analyzing the received playlist; acquiring the image corresponding to the network address based on the analysis result; and displaying the partial region and the annotation information while superimposing the partial region and the annotation information on the image.
According to yet still another embodiment of the present disclosure, a non-transitory computer-readable storage medium stores a program which, when executed by a computer comprising a processor and a memory, causes the computer to: generate a playlist including a network address that is referred to for acquisition of an image, region information defining a spatial partial region in the image, and annotation information that is information to be displayed in association with the partial region; and send the generated playlist.
Further features of the present disclosure will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed disclosure. Multiple features are described in the embodiments, but limitation is not made to a disclosure that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
Playlists, which are files distributed for the purpose of distributing an arbitrary image and which are different from the image data itself, have not been configured to be provided with annotation information to be displayed in association with a partial region of a video.
The present disclosure has an object to provide a file different from image data used for the distribution of an image with annotation information associated with a partial region in the image.
The information processing apparatus 100 generates a playlist including a network address to be referred to for the acquisition of an image and sends the playlist together with the image to the receiving apparatus 110. The information processing apparatus 100 can be, for example, a camera, a video camera, a portable terminal such as a smartphone, a PC (Personal Computer), or a cloud server. However, the information processing apparatus 100 is not limited to these as long as the apparatus can execute each function to be described later. Note that the image to be transmitted in this case may be a moving image (video) but is assumed to be one still image for the sake of descriptive convenience.
The receiving apparatus 110 receives data from the information processing apparatus 100. The receiving apparatus 110 includes a playback/display function for content such as an image and may accept an input from the user. As the receiving apparatus 110 according to this embodiment, a desired electronic device, for example, a portable terminal such as a smartphone, a PC, or a TV set can be used.
The network 120 can be any one of various types of networks such as the Internet, an intranet, a LAN (Local Area Network), or a WAN (Wide Area Network). The wired communication interface can be an interface complying with the Ethernet® standards but may be another type of interface. The wireless communication interface may be an interface complying with the wireless LAN standards of the IEEE 802.11 standard series, with WAN standards such as the 3G/4G/LTE standards, or with the Bluetooth® standards. Note that as a wireless connection form, a connection form in an infrastructure network or a connection form in an ad hoc network may be used. In addition, the network 120 may be a combination of a wired communication path and a wireless communication path. That is, the network 120 may have an arbitrary form as long as it establishes connection between the information processing apparatus 100 and the receiving apparatus 110 and allows communication between them.
This embodiment uses the MPEG-DASH (Moving Picture Experts Group Dynamic Adaptive Streaming over HTTP) standards of ISO/IEC 23009-1. Assume in the following description that each process, such as the playlist generation processing (to be described later), is performed by using the MPEG-DASH standards.
The MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standards will be described below. MPEG-DASH is a set of video distribution standards that allow the receiving side to dynamically change the streams it acquires.
MPEG-DASH can divide media data into segments each having a predetermined time length and describe URLs (Uniform Resource Locators) for acquiring segments in a file called a playlist. The receiving apparatus can acquire this playlist first and then acquire a desired segment by requesting it from the sending apparatus using information described in the playlist. In addition, describing URLs for segments in a plurality of versions different in bit rate and resolution in the playlist allows the receiving apparatus to acquire a segment in an optimal version in accordance with the performance of the receiving apparatus itself, a communication environment, and the like. ISO Base Media File Format (to be referred to as ISOBMFF hereinafter) is used as the file format of the segment.
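The version-selection mechanism described above can be sketched as follows. This Python fragment is only an illustration: the simplified playlist, element names, and selection policy are assumptions, and a real MPD also carries XML namespaces, Periods, AdaptationSets with media attributes, and segment addressing information that are omitted here.

```python
import xml.etree.ElementTree as ET

# Simplified playlist for illustration only; not a complete DASH MPD.
MPD = """
<MPD>
  <Period>
    <AdaptationSet>
      <Representation id="video_low" bandwidth="500000">
        <BaseURL>http://example.com/low/seg.mp4</BaseURL>
      </Representation>
      <Representation id="video_high" bandwidth="3000000">
        <BaseURL>http://example.com/high/seg.mp4</BaseURL>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
"""

def select_representation(mpd_text, available_bps):
    """Pick the highest-bandwidth representation that fits the link rate."""
    reps = ET.fromstring(mpd_text).findall(".//Representation")
    fitting = [r for r in reps if int(r.get("bandwidth")) <= available_bps]
    if fitting:
        best = max(fitting, key=lambda r: int(r.get("bandwidth")))
    else:  # nothing fits: fall back to the lowest bit rate
        best = min(reps, key=lambda r: int(r.get("bandwidth")))
    return best.get("id"), best.find("BaseURL").text

rep_id, url = select_representation(MPD, available_bps=1_000_000)
# → ("video_low", "http://example.com/low/seg.mp4")
```

With a 1 Mbps link only the 500 kbps stream fits the constraint, so the client would request the URL of the low-bit-rate segment.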
The configuration of ISOBMFF is roughly divided into a portion storing header information and a portion storing encoded data. The header information includes information indicating the size and time stamp of the encoded data stored in the segment. As the encoded data, a moving image, a still image, speech data, or the like can be stored. ISOBMFF includes a plurality of extension standards according to the types of encoded data to be stored. One of the extension standards is HEIF (High Efficiency Image File Format), standardized by MPEG. HEIF is in the process of standardization under the title of "Image File Format" in ISO/IEC 23008-12 (Part 12) and defines specifications for the storage of still images and image sequences encoded by HEVC (High Efficiency Video Coding), a codec mainly used for moving images. In addition, ISOBMFF can store metadata such as text or XML in addition to media data such as the above moving images, and can store the metadata not only as static information but also as dynamic information. In particular, metadata having information in a time-series manner is called timed metadata, of which subtitle data is a typical example.
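The size/type box layout of ISOBMFF described above can be illustrated with a minimal parser. The sketch below assumes 32-bit box sizes and ignores nesting and FullBox version/flags fields (a real meta box, for example, is a FullBox); it is not the actual parsing logic of the analyzing unit described later.

```python
import struct

def walk_boxes(data, offset=0, end=None):
    """Yield (type, payload) for each top-level ISOBMFF box in `data`.

    Each box starts with a 4-byte big-endian size (covering the whole
    box, header included) followed by a 4-character type. 64-bit
    "largesize" boxes and nested parsing are omitted for brevity.
    """
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii")
        yield box_type, data[offset + 8:offset + size]
        offset += size

# A toy file: an empty 'meta' box followed by an 'mdat' box with 4 bytes.
toy = (struct.pack(">I4s", 8, b"meta")
       + struct.pack(">I4s", 12, b"mdat") + b"\x00\x01\x02\x03")
boxes = list(walk_boxes(toy))
# → [("meta", b""), ("mdat", b"\x00\x01\x02\x03")]
```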
The analyzing unit 101 analyzes the structure of a data file. Assume that in the following description, the data file to be analyzed by the analyzing unit 101 has the HEIF file format. The extracting unit 102 extracts metadata and encoded data stored in the data file based on the analysis result on the data file obtained by the analyzing unit 101.
The generating unit 103 divides the metadata and the encoded data extracted by the extracting unit 102 into data each having a time length suitable for communication as needed or changes the bit rates, thereby generating segments storing the respective data. The converting unit 104 can convert extracted encoded data into a different coding format as needed. Note that the generating unit 103 may store encoded data converted by the converting unit 104 in a segment. The storing unit 105 stores the data generated by the generating unit 103.
The generating unit 106 generates a playlist including a network address to be referred to by the receiving apparatus 110 to acquire data stored in the storing unit 105 based on an analysis result on a data file. The playlist includes region information defining a partial region on an image included in the data file and annotation information as information displayed in association with the partial region.
As a network address included in a playlist in this case, a URI (Uniform Resource Identifier) is basically used. The generating unit 106 may describe a URL or an Internet or LAN IP address as a network address. The format of the network address is not specifically limited as long as it can describe the location of the data.
A partial region can be set on an image by an arbitrary technique. For example, the generating unit 106 may perform image analysis processing for an image input to the information processing apparatus 100 and set a region satisfying a predetermined condition as a partial region. For example, when a predetermined object is detected by image analysis, the generating unit 106 may set a bounding box indicating the object as a partial region defined by region information. In addition, when, for example, a predetermined event is detected by context analysis, the generating unit 106 may set a region in which the predetermined event has occurred as a partial region. Alternatively, the generating unit 106 may accept an input from the user and set a region designated by the user as a partial region. Although the position and shape of a partial region and the manner of how the partial region is described in a playlist are not specifically limited, the details of them will be described later with reference to
Annotation information is information to be displayed in association with a partial region as described above. For example, annotation information is information to be displayed while being superimposed on an image in association with a partial region like annotation information 1 to 3 in
The generating unit 106 includes, in a playlist, annotation information to be displayed in association with a partial region on still image data constituting video data stored in a HEIF file based on the analysis result obtained by the analyzing unit 101. The configuration of a HEIF file analyzed by the analyzing unit 101 will be described with reference to
In this embodiment, the analyzing unit 101 analyzes nested boxes constituting a HEIF file and acquires each piece of information included in the image file by using the extracting unit 102. In this embodiment, each box of a HEIF file is identified by a four-character identifier and stores information for each use. In the following description, each box is represented by a four-character identifier assigned to the box. In the example shown in
The meta (MetaDataBox) 301 is a box storing metadata and includes, as boxes, hdlr, dinf, iloc 305, iinf 303, iref 304, pitm, iprp 306, ipma 307, and idat 308. The meta 301 can store various types of information, such as information concerning the ID of each item of image and speech files, information concerning the encoding of media data, and information concerning the method of storing the media data in the HEIF file.
Although item data can be stored in mdat 302, the data may be stored in the idat 308 in the meta 301. In the case shown in
The hdlr (HandlerReferenceBox) stores handler type information for identifying the structure and format of content included in the meta.
The iinf (ItemInformationBox) 303 stores information indicating the identifiers for identifying all items stored in the HEIF file, including the image items of the images, and the types of those items. Item information is information indicating the ID (item ID) of each item in the HEIF file, an item type indicating the type of the item, and the name of the item. The iinf 303 can also store item information when Exif data generated at the time image data is captured by a digital camera, or region information indicating a partial region in image data stored as an item, is stored as an item.
The iref (ItemReferenceBox) 304 is a box storing association information between items. It stores, for example, association information between a still image and Exif data or between a still image and region information, and defines a reference type according to the relationship of association between the items. For example, as a type of association between items concerning region information, cdsc, which is intended to provide the item at the reference destination with explanatory information, is defined. In this embodiment, association information includes information indicating annotation information displayed in association with a partial region of video data (a constituent image). In addition, association information may include association information between image data and Exif data.
The iloc (ItemLocationBox) 305 stores information indicating the ID of each of items such as an image and its encoded data in the HEIF file (that is, the identification information of each image) and a storage place (location). In each process performed by the information processing apparatus 100, information indicating where item data defined in the HEIF file is located can be acquired by referring to the iloc 305. The iloc 305 includes a construction method as information indicating the storage place of each item. For example, when the reference type defined by the iref 304 is cdsc, “1” indicating that the storage place of the item is the idat 308 is generally often defined as a construction method. In the example shown in
The iprp (ItemPropertyBox) 306 stores the attribute information of an image in the image file. Accordingly, the iprp 306 includes an ipco box and an ipma box. Attribute information is information concerning the display of an image, such as the width and height of the image and the number and bit length of color components. In the example shown in
The ipma (ItemPropertyAssociationBox) 307 stores information indicating association between the information stored in ipco and an item ID. In the example shown in
Subsequently, the relationship between the items and the properties stored in the HEIF file having the configuration described with reference to
In step S501, the information processing apparatus 100 acquires an HEIF file as an analysis target. In this case, the information processing apparatus 100 acquires an HEIF file from, for example, an imaging device (not shown). In step S502, the analyzing unit 101 acquires item IDs as the identifiers of the respective items included in the HEIF file and item types by analyzing the file.
In step S503, the analyzing unit 101 acquires reference relationships, including reference types, between the items based on the item IDs with reference to the iref 304. In step S504, the analyzing unit 101 acquires properties associated with the respective items.
In step S505, the analyzing unit 101 determines whether the items acquired in step S502 include region information indicating a partial region. If YES in step S505, the process advances to step S506. If NO in step S505, the processing is terminated upon determining that there is no annotation information.
In step S506, the analyzing unit 101 determines whether a property associated with at least one region information item includes annotation information. If YES in step S506, the process advances to step S507. If NO in step S506, the processing is terminated upon determining that there is no annotation information.
In step S507, the generating unit 103 generates segments for distribution. In this case, when, for example, a plurality of items are stored in a HEIF file, the generating unit 103 generates one file for each still image item. In step S508, the generating unit 106 generates a playlist based on annotation information and terminates the processing.
An example of a playlist generated by the generating unit 106 will be described next with reference to
A playlist 600 shown in
In the example shown in
The generating unit 106 can describe the coordinates of a partial region following the shape of the partial region. In this case, the description of the coordinates differs in number and meaning according to the shape of the partial region. For example, when a partial region has a point shape, the generating unit 106 may describe, as coordinate information, one parameter indicating the vertical and horizontal coordinates (XY coordinates) within the main image. In the example shown in
In the region information 607 representing a rectangular partial region, two parameters indicating the horizontal and vertical sizes of the rectangle may be described in addition to a parameter indicating the coordinates of the upper left corner of the rectangle. In the region information 608 representing a circular partial region, three parameters indicating the center coordinates of the circle and the length of its radius may be described. In addition, adding a rotation angle as a parameter can define region information for an inclined ellipse. The shapes of partial regions are not limited to those described above, and any desired shape can be used as long as the shape can be represented by parameters. Note that when a plurality of partial regions include partial regions having an identical shape, the generating unit 106 may describe such partial regions as one element.
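As a rough illustration of such shape-dependent parameter lists, the following sketch serializes a region as a comma-separated value string, shape code first. The numeric shape codes and parameter counts used here are assumptions for illustration, not the codes actually defined in the playlist descriptor.

```python
# Illustrative shape codes; the actual numeric codes in the playlist
# descriptor are an assumption here.
SHAPE_CODES = {
    "point": 1,        # x, y
    "rectangle": 2,    # x, y, width, height
    "circle": 3,       # center_x, center_y, radius
}
PARAM_COUNTS = {"point": 2, "rectangle": 4, "circle": 3}

def encode_region(shape, *params):
    """Serialize a region as the comma-separated value string described
    above: the shape code first, then the shape-dependent coordinates."""
    if len(params) != PARAM_COUNTS[shape]:
        raise ValueError(f"{shape} needs {PARAM_COUNTS[shape]} parameters")
    return ",".join(str(v) for v in (SHAPE_CODES[shape], *params))

# A 100x50 rectangle whose upper left corner is at (300, 200):
value = encode_region("rectangle", 300, 200, 100, 50)
# → "2,300,200,100,50"
```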
With regard to association between the main image and the annotation information, the generating unit 106 sets the representation ID of the main image in “associationId” as the attribute information of the representation of the annotation information. The generating unit 106 describes a type indicating the attribute of the annotation information in “associationType”. In this case, “cdsc” is set in “associationType” to indicate the annotation information with respect to the main image. In addition, since the sub-image is an image associated with a partial region of annotation information 603, “eroi” is set in “associationType” of the annotation information 603.
In a playlist 700, region information 701 and region information 702 indicate the same region. In the example shown in
In the example shown in
Annotation 2 indicates that the partial region has an arbitrary shape by setting a first value 804 of “value=” to “5”. In this case, four succeeding values 805 are parameters representing a region into which the partial region is fitted, that is, representing the coordinates of the upper left corner of the arbitrary region and the horizontal and vertical sizes of the region. A succeeding value 806 is a value to be referred to when generating a reduced image by pixel integration of pixel-by-pixel information represented by a bit mask. In this case, as indicated by a pixel integration example 820, “2” is described as a value indicating a mask that reduces an image by integrating two adjacent pixels into one pixel. Generating such mask data can reduce the amount of data to about ¼. This pixel integration method may be arbitrarily set. Although “2” is described as a value to be applied to both the numbers of pixels to be integrated in the vertical and horizontal directions in the pixel integration example 820, different values may be described in the respective directions. In this case, different values may be described as one parameter in the form of, for example, “n x m” where n is the value in the vertical direction and m is the value in the horizontal direction or may be described as two parameters in the form of, for example, “n”, “m”. In the playlist 800, “mask data” set as a representation ID 808 of the mask data is described in a last value 807 of the region information parameters of annotation 2, thereby associating the region information of annotation 2 with the mask data.
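The pixel integration of a bit mask described above can be sketched as follows. The OR rule used here (a reduced pixel is set when any pixel in its block is set) is one arbitrary choice of integration method, as noted above, and the mask dimensions are assumed to be multiples of the integration factors.

```python
def integrate_mask(mask, n, m):
    """Reduce a bit mask by integrating n x m blocks of pixels into one.

    `mask` is a list of rows of 0/1 values whose dimensions are assumed
    to be multiples of n (vertical) and m (horizontal). A reduced pixel
    is set when any pixel in its block is set (OR integration).
    """
    rows, cols = len(mask), len(mask[0])
    return [
        [
            int(any(mask[r + dr][c + dc]
                    for dr in range(n) for dc in range(m)))
            for c in range(0, cols, m)
        ]
        for r in range(0, rows, n)
    ]

mask = [
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]
reduced = integrate_mask(mask, 2, 2)  # 2x2 integration: 16 values -> 4
# → [[1, 0], [1, 1]]
```

Integrating 2 x 2 blocks reduces the mask to a quarter of its original number of pixels, matching the data reduction mentioned above; different values of n and m give asymmetric reduction.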
Note that, according to MPEG-DASH, in order to acquire identical content while dynamically changing the bit rate or resolution, it is possible to prepare streams with different bit rates or different resolutions and describe URLs that allow the acquisition of the respective streams in MPD. This configuration makes it possible to use content with a bit rate or resolution suitable for a communication band or the processing performance of a client. In the examples shown in
A processing example for making the scaling of region information compatible with a change in the resolution of a video in consideration of the above case will be described with reference to
As indicated by a display example 910, a playlist 900 is a description of data that associates an annotation image with one partial region of a main image 901. In the example shown in
Referring to
Referring to
Note that in this example, since three main images with different resolutions are prepared, different representation IDs corresponding to the respective images are prepared. The example in
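One way a client could scale region coordinates when switching between representations with different resolutions can be sketched as follows; the linear per-axis scaling and the rounding to integer pixels are simplifying assumptions, not a procedure defined by the playlist.

```python
def scale_region(coords, ref_size, target_size):
    """Scale rectangle coordinates (x, y, w, h) defined against a
    reference resolution onto a representation with another resolution.

    Assumes both axes scale independently and linearly; rounding to the
    nearest integer pixel is a simplification.
    """
    ref_w, ref_h = ref_size
    tgt_w, tgt_h = target_size
    x, y, w, h = coords
    sx, sy = tgt_w / ref_w, tgt_h / ref_h
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A region authored against 1920x1080, displayed on a 1280x720 stream:
scaled = scale_region((300, 200, 600, 400), (1920, 1080), (1280, 720))
# → (200, 133, 400, 267)
```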
An example of associating one piece of annotation information with a plurality of different partial regions will be described next with reference to
A playlist 1000 is the description of data with the same annotation information associated with a plurality of partial regions in a main image as indicated by annotation 1 and annotation 2 in a display example 1010. In this example, annotation 1 is associated with three rectangles 1, 4, and 6 as partial regions, and annotation 2 is associated with rectangles 2 and 3 and circle 5.
In the playlist 1000, following the first value (representing the shape of the partial region) of “value=” of the region information described as the attribute information of the annotation information, a value 1001 indicating the number of corresponding partial regions is described. In the example shown in
In this case, the partial regions with which the same annotation information is associated may have different shapes. In the example shown in
An example of displaying a plurality of image data in combination as a main image will be described with reference to
In this case, the generating unit 106 can describe a main image 1101 by using SRD (Spatial Relationship Description), which is defined by MPEG-DASH and is a technique for spatially arranging images or videos. In this case, for images 1 to 4, the representation IDs of image1 to image4 are defined. Assume that the respective images constituting the main image are arranged in the main image by a description similar to that for the partial regions in
Note that the images constituting a main image need not have the same size and need not be arranged in a tile pattern as in
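The SRD-style arrangement of constituent images can be sketched as follows. The tuple layout follows the order of the SRD value string (source_id, object_x, object_y, object_width, object_height, total_width, total_height), and expressing each placement as a fraction of the total size is an illustrative choice that makes the layout resolution independent.

```python
def srd_positions(tiles):
    """Given SRD-style tuples (id, object_x, object_y, object_w,
    object_h, total_w, total_h), return each tile's placement as
    fractions (x, y, w, h) of the composed main image."""
    placed = {}
    for tile_id, x, y, w, h, total_w, total_h in tiles:
        placed[tile_id] = (x / total_w, y / total_h,
                           w / total_w, h / total_h)
    return placed

# Four equally sized images arranged in a 2x2 grid on a 1920x1080 canvas,
# mirroring the image1-image4 representation IDs above:
tiles = [
    ("image1", 0, 0, 960, 540, 1920, 1080),
    ("image2", 960, 0, 960, 540, 1920, 1080),
    ("image3", 0, 540, 960, 540, 1920, 1080),
    ("image4", 960, 540, 960, 540, 1920, 1080),
]
pos = srd_positions(tiles)
# pos["image4"] → (0.5, 0.5, 0.5, 0.5)
```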
An example of providing annotation information with tag information to improve the search performance and the convenience of controlling and managing information will be described with reference to
In a playlist 1200, there are six partial regions 1 to 6 provided with annotation information on a main image, and common tags are provided to the pieces of annotation information with the same attributes. In the example shown in
A display example 1210 is an example of displaying the information described in the playlist 1200. The generating unit 106 may generate a playlist so as to display only annotation information having a specific tag, or so as to display the information color-coded, in consideration of a case in which, for example, the display becomes complicated when all the pieces of annotation information are superimposed and displayed on the main image.
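Tag-based filtering on the receiving side could look like the following sketch; the annotation dictionary layout and the tag names are hypothetical stand-ins for the tagged annotation information described above.

```python
def filter_by_tag(annotations, tag):
    """Return only the annotations carrying the requested tag, so that
    the display can be decluttered as described above."""
    return [a for a in annotations if tag in a["tags"]]

# Hypothetical annotation records sharing common tags:
annotations = [
    {"id": "annot1", "text": "person at entrance", "tags": ["human"]},
    {"id": "annot2", "text": "parked car", "tags": ["vehicle"]},
    {"id": "annot3", "text": "cyclist", "tags": ["human", "vehicle"]},
]
humans = filter_by_tag(annotations, "human")
# → the records with ids "annot1" and "annot3"
```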
According to such a configuration, it is possible to generate and transmit a playlist including a network address for the acquisition of an image, region information defining a partial region on the image, and annotation information as information to be displayed in association with the partial region. Therefore, it is possible to generate a playlist for displaying annotation information in a partial region with respect to an input video and send the playlist to the user who requires the annotation information.
The information processing apparatus according to the first embodiment causes the generating unit 106 to generate a playlist including region information defining a partial region and annotation information. In contrast to this, the second embodiment externally acquires region information and annotation information. The information processing apparatus according to this embodiment has a functional configuration similar to that shown in
XMP1 and XMP2 in the playlist 1300 can be acquired by accessing the URLs described in the playlist 1300, and describe region information defining a partial region in the main image and annotation information associated with the region. The basic format of XMP is XML (Extensible Markup Language). It is preferable to describe information for acquiring a schema for interpreting the description.
The generating unit 106 may store information for performing image analysis instead of directly storing region information and annotation information. That is, the generating unit 106 may store, as information necessary to acquire region information and annotation information by image analysis, for example, a URI of an image analysis service, information for identifying a function used in the service, or a parameter passed to an API provided by the image analysis service. Such processing makes it possible to store information for acquiring region information and annotation information, which can be generated and provided by image analysis processing, without directly storing the region information and the annotation information in the playlist. In this case, the generating unit 106 may store information indicating an image analysis unit, type, or algorithm. For example, the generating unit 106 can store information for identifying the image analysis to be executed, such as context analysis (for example, suspicious behavior analysis in a monitoring camera) or object analysis for identifying an animal, a human, a vehicle, or the like. It is possible to arbitrarily use, as an object to be analyzed, any object that can be identified by general analysis processing, such as a human face or pupil, a human, an animal, a motorcycle, a number plate, or a lesion portion (in medical image diagnosis or the like). In addition, there is no need to store information for performing image analysis for both region information and annotation information; region information or annotation information may be directly stored for one of the two pieces of information.
In the above examples, the processing of generating a playlist basically for a still image has been described. However, the generating unit 106 may generate a playlist including region information and annotation information for a main image as a moving image. A case in which a main image is a moving image will be described below with reference to
In a playlist 1400, one main image, which is a moving image, and two types of information concerning the main image, namely region information and annotation information, are defined. In this case, the region information and the annotation information are timed metadata having information according to time series and can be acquired as MP4 files like the main image (moving image). Although the format of the timed metadata may be an XMP/XML file as in the case in which the main image is a still image, it is preferable that the data be temporally synchronized with the frames of the main image. In addition, when the position of a partial region is fixed even in a case in which the main image is a moving image, the region information and the annotation information may be described as in the first embodiment.
Note that in MPEG-DASH and a streaming technique similar thereto, different pieces of region information can be provided for each period as the time length of each segment. Accordingly, region information may be set and updated for each period.
The first and second embodiments each have mainly exemplified the processing by the information processing apparatus. The third embodiment exemplifies processing concerning playlist analysis and playback which is performed by a receiving apparatus 110 which has received the playlist output from an information processing apparatus 100.
In step S1501, the receiving apparatus 110 acquires a playlist from the information processing apparatus 100. In step S1502, the receiving apparatus 110 determines, based on the description of the playlist, whether there is annotation information in a medium to be played back. In the example shown in
In step S1503, the receiving apparatus 110 determines whether any partial region is associated with the annotation information. In this case, the receiving apparatus 110 determines whether region information is provided as the attribute of the annotation information. The region information is described as being defined by a schema like “urn:mpeg:dash:rgon:2021” as in the first embodiment. If a partial region is associated with the annotation information, the process advances to step S1504; otherwise, the processing is terminated.
In step S1504, the receiving apparatus 110 defines a partial region on the main image based on the playlist. In this case, the receiving apparatus 110 acquires, based on the description of the playlist, the size of the medium (main image) to be played back and the region information, and specifies the shape and position of the partial region.
In step S1505, the receiving apparatus 110 acquires the encoded data of a medium to be played back based on the network address described in the playlist and plays back and displays the data. In step S1506, the receiving apparatus 110 superimposes and displays a frame surrounding the partial region on the display screen displayed in step S1505. In step S1507, the receiving apparatus 110 acquires annotation information and displays the information on the display screen in association with the frame displayed in step S1506.
This processing makes it possible to acquire a video to be played back based on the information of the playlist and annotation information to be displayed in association with a partial region of the video and play back the video and the information.
Although the embodiments have been described in detail, the present disclosure can take embodiments as a system, apparatus, method, program, recording medium (storage medium), and the like. More specifically, the present disclosure can be applied to a system including a plurality of devices (for example, a host computer, an interface device, an imaging device, and a web application) or to an apparatus including a single device.
The present disclosure can also be achieved by directly or remotely supplying programs of software for implementing the functions of the above embodiments to a system or apparatus and causing the computer of the system or apparatus to read out and execute the programs. In this case, the programs are computer-readable programs corresponding to the flowcharts shown in the accompanying drawings in the embodiments.
Accordingly, the program codes themselves which are installed in the computer to allow the computer to implement the functions/processing of the present disclosure also implement the present disclosure. That is, the present disclosure incorporates the computer programs themselves for implementing the functions/processing of the present disclosure.
In this case, each program may take any form, for example, an object code, a program executed by an interpreter, and script data supplied to an OS, as long as it has the function of the program.
Examples of the recording medium for supplying the programs include a Floppy® disk, a hard disk, an optical disk, a magneto-optical disk (MO), a CD-ROM, a CD-R, a CD-RW, a magnetic tape, a nonvolatile memory card, a ROM, and a DVD (DVD-ROM or DVD-R).
Methods of supplying the programs include the following. A client computer connects to a homepage on the Internet by using a browser to download each computer program itself (or a compressed file including an automatic install function) of the present disclosure from the homepage into a recording medium such as a hard disk. Alternatively, the programs can be supplied by dividing the program codes constituting each program of the present disclosure into a plurality of files, and downloading the respective files from different homepages. That is, the present disclosure also incorporates a WWW server which allows a plurality of users to download program files for causing the computer to implement the functions/processing of the present disclosure.
In addition, the programs of the present disclosure can be encrypted and stored in storage media such as CD-ROMs and be distributed to users. In this case, users who satisfy a predetermined condition are allowed to download key information for decryption from a homepage through the Internet. That is, the users can execute the encrypted programs by using the key information and make the computers install the programs.
The functions of the above embodiments are implemented by making the computer execute the readout programs. In addition, the functions of the above embodiments can also be implemented by making the OS and the like running on the computer execute part or all of actual processing based on the instructions of the programs.
The functions of the above embodiments are also implemented by writing the programs read out from the recording medium in the memory of a function expansion board inserted into the computer or a function expansion unit connected to the computer. That is, the CPU or the like of the function expansion board or function expansion unit can execute part or all of actual processing based on the instructions of the programs.
Referring to
Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2021-165649 filed Oct. 7, 2021, which is hereby incorporated by reference herein in its entirety.