The present disclosure relates to a technique for generating a virtual viewpoint image using a three-dimensional model.
Techniques for generating a virtual viewpoint image from a specified virtual viewpoint by using a plurality of images captured by a plurality of imaging apparatuses are attracting attention.
Japanese Patent Application Laid-Open No. 2020-144748 discusses a technique for predicting the motion of a three-dimensional model and determining the position of a virtual viewpoint.
There is a demand for generating a three-dimensional model immediately after capturing images, generating a virtual viewpoint image using the generated three-dimensional model, and distributing the virtual viewpoint image almost in real time. However, because processing for generating the three-dimensional model based on the captured images takes time, a time lag occurs between the time when the images are captured by actual cameras and the time when the virtual viewpoint image is generated. Thus, in a case where a user (an operator) performs a virtual viewpoint operation with reference to the virtual viewpoint image, the user also needs to take into account the time lag until the operation is reflected in the virtual viewpoint image. This makes it hard for the user to perform a virtual viewpoint operation suitable for an event.
The present disclosure is directed to preventing a careless virtual viewpoint operation at the time of occurrence of an event.
According to an aspect of the present disclosure, an information processing apparatus includes one or more memories storing instructions, and one or more processors executing the instructions to acquire event information, and determine a position of a virtual viewpoint and a line-of-sight direction from the virtual viewpoint based on the event information before a three-dimensional model of a subject is generated.
Further features of the present disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Exemplary embodiments of the present disclosure will be described below with reference to the accompanying drawings. The present disclosure is not limited to the following exemplary embodiments. In the drawings, identical members or elements are assigned the same reference numerals, and duplicated descriptions thereof will be omitted or simplified.
An information processing system according to a first exemplary embodiment generates a virtual viewpoint image viewed from a specified virtual viewpoint based on images captured from different directions by a plurality of imaging apparatuses (cameras), statuses of the imaging apparatuses, and the virtual viewpoint. A virtual viewpoint image according to the present exemplary embodiment is also referred to as a free viewpoint video image, but is not limited to an image corresponding to a viewpoint freely (optionally) specified by a user. Examples of the virtual viewpoint image include an image corresponding to a viewpoint selected from among a plurality of candidates by the user. The present exemplary embodiment will be described below focusing on a case where a virtual viewpoint is specified through a user operation. Alternatively, a virtual viewpoint can be automatically specified based on, for example, an image analysis result. The present exemplary embodiment will also be described below focusing on a case where a virtual viewpoint image is a moving image. Alternatively, a virtual viewpoint image can be a still image.
Viewpoint information used to generate a virtual viewpoint image indicates the position and orientation (line-of-sight direction) of a virtual viewpoint. More specifically, the viewpoint information is a parameter set including a parameter representing the three-dimensional position of a virtual viewpoint, and a parameter representing the orientation of the virtual viewpoint in pan, tilt, and roll directions. The contents of the viewpoint information are not limited thereto. For example, the parameter set as the viewpoint information can include a parameter representing the size of the visual field (the angle of view) of the virtual viewpoint. The viewpoint information can also include a plurality of parameter sets. For example, the viewpoint information can include a plurality of parameter sets respectively corresponding to a plurality of frames included in the moving image as the virtual viewpoint image, and can be information indicating the position and orientation of the virtual viewpoint at each of a plurality of continuous points of time.
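By way of illustration only, the following is a minimal Python sketch of such a parameter set; the class name, field names, and default angle of view are hypothetical assumptions and are not part of the disclosed viewpoint information format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ViewpointParameters:
    """One parameter set of the viewpoint information (hypothetical names)."""
    x: float      # three-dimensional position of the virtual viewpoint, in meters
    y: float
    z: float
    pan: float    # orientation of the virtual viewpoint, in degrees
    tilt: float
    roll: float
    angle_of_view: float = 60.0   # optional parameter representing the size of the visual field

# Viewpoint information for a moving image can hold one parameter set per frame,
# i.e., the position and orientation at each of a plurality of continuous points of time.
ViewpointInformation = List[ViewpointParameters]
```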
The image processing system includes a plurality of imaging apparatuses that captures images of an imaging region from a plurality of directions. The imaging region is, for example, a sports stadium where sports such as soccer and karate are performed, or a stage where concerts and dramas are performed. The plurality of imaging apparatuses is installed at different positions surrounding such an imaging region, and captures images in synchronization with each other. The plurality of imaging apparatuses may not necessarily be installed over the entire circumference of the imaging region and can be installed at only a part of the circumference of the imaging region due to reasons such as limitations on installation locations. Imaging apparatuses having different functions, such as telephoto cameras and wide-angle cameras, can also be installed.
Each of the plurality of imaging apparatuses according to the present exemplary embodiment has an independent housing and is a camera capable of capturing images with a single viewpoint. However, the present disclosure is not limited thereto. Two or more imaging apparatuses can be included in the same housing. For example, a single camera including a plurality of lens groups and a plurality of sensors and capable of capturing images from a plurality of viewpoints can be installed as the plurality of imaging apparatuses.
A virtual viewpoint image is generated, for example, by the following method. First, the plurality of imaging apparatuses captures images from different directions to acquire a plurality of captured images (a plurality of viewpoint images). Then, a foreground image is obtained by extracting a foreground region corresponding to predetermined objects such as a person and a ball from the plurality of viewpoint images, and a background image is obtained by extracting a background region other than the foreground region from the plurality of viewpoint images. A foreground model representing the three-dimensional shape of a predetermined object and texture data for coloring the foreground model are generated based on the foreground image. Texture data for coloring a background model representing the three-dimensional shape of a background such as a sports stadium is generated based on the background image. The foreground model and the background model are subjected to texture data mapping and rendering based on the virtual viewpoint indicated by the viewpoint information to generate a virtual viewpoint image. The method for generating a virtual viewpoint image is not limited thereto. Various examples of the method include generating a virtual viewpoint image by projection conversion of captured images without three-dimensional models.
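The method described above does not prescribe how the foreground region is extracted. As one possible illustration, the following Python/NumPy sketch obtains a foreground mask by simple background differencing; the function name, the threshold, and the use of a pre-captured background image are hypothetical assumptions rather than the actual processing of the system.

```python
import numpy as np

def extract_foreground_mask(captured, background, threshold=30):
    """Hypothetical sketch: mark pixels that differ sufficiently from a
    pre-captured background image as the foreground region (e.g., a person
    or a ball).  captured / background: (H, W, 3) uint8 images."""
    diff = np.abs(captured.astype(int) - background.astype(int)).sum(axis=2)
    return diff > threshold   # binary foreground mask (silhouette) of shape (H, W)
```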
A virtual camera, which is different from the plurality of imaging apparatuses actually installed around the imaging region, is a concept for conveniently explaining a virtual viewpoint related to the generation of a virtual viewpoint image. More specifically, a virtual viewpoint image can be considered as an image captured from a virtual viewpoint set in a virtual space related to the imaging region. The position and orientation of the viewpoint in such virtual image capturing can be represented as the position and orientation of the virtual camera. In other words, assuming that a camera exists at the position of a virtual viewpoint set in a space, a virtual viewpoint image refers to an image that simulates a captured image acquired by the camera. In the present exemplary embodiment, the transition of a virtual viewpoint over time is referred to as a virtual camera path. However, it is not essential to use the concept of the virtual camera in order to implement the configuration according to the present exemplary embodiment. It is sufficient to set at least information representing a specific position and orientation in a space and generate a virtual viewpoint image based on the set information.
A camera group 101 includes a plurality of cameras arranged at different positions in, for example, a stadium where basketball is performed, and the plurality of cameras captures images from a plurality of viewpoints in synchronization with each other. Data of the plurality of viewpoint images acquired in the synchronous image capturing is transmitted to a three-dimensional model generation apparatus 102 and an event detection apparatus 104.
The three-dimensional model generation apparatus 102 receives the plurality of viewpoint images from the camera group 101 and generates a three-dimensional model. For example, Visual Hull (Shape from Silhouette) is used to generate a three-dimensional model. As a result of this processing, a three-dimensional (3D) point group (a set of points having three-dimensional coordinates) representing the three-dimensional shape of a subject is obtained. The method for deriving the three-dimensional shape of a subject based on the captured images is not limited thereto.
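As an illustration of the Visual Hull (Shape from Silhouette) idea, the following is a minimal voxel-carving sketch in Python/NumPy; the camera model, the data layout, and all names are simplified assumptions and do not represent the actual processing of the three-dimensional model generation apparatus 102.

```python
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """Carve a candidate voxel grid: keep only voxels whose projection falls
    inside every silhouette (Shape from Silhouette).

    silhouettes: list of binary images of shape (H, W), foreground = True
    projections: list of 3x4 camera projection matrices
    grid:        (N, 3) array of candidate voxel centers in world coordinates
    """
    keep = np.ones(len(grid), dtype=bool)
    homogeneous = np.hstack([grid, np.ones((len(grid), 1))])   # (N, 4)
    for silhouette, P in zip(silhouettes, projections):
        h, w = silhouette.shape
        uvw = homogeneous @ P.T                                # project voxels into the image
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = silhouette[v[inside], u[inside]]
        keep &= hit            # a voxel must be seen as foreground by every camera
    return grid[keep]          # 3D point group representing the subject's shape
```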
A three-dimensional model storage apparatus 103 stores the three-dimensional model generated by the three-dimensional model generation apparatus 102, in association with time information. Based on time information received from a virtual viewpoint image generation apparatus 106, the three-dimensional model storage apparatus 103 also transmits the three-dimensional model associated with the time information to the virtual viewpoint image generation apparatus 106.
The event detection apparatus 104 detects an event corresponding to a time and a subject based on the plurality of viewpoint images received from the camera group 101. In the present exemplary embodiment, an event is caused by an action of a subject or a phenomenon on a subject. For example, the event detection apparatus 104 detects an event caused by a subject, such as traveling in a basketball game. While in the present exemplary embodiment, the event detection apparatus 104 is described to detect an event based on a result of image processing on the images captured by the camera group 101, the trigger for detecting an event is not limited to the input from the camera group 101. For example, the event detection apparatus 104 can detect an event based on a signal obtained from a sensor such as a goal sensor or a starter pistol sensor in track competitions, or a sword tip sensor in fencing. Alternatively, the event detection apparatus 104 can detect an event based on a result of analyzing sound information acquired through a microphone. The event detection apparatus 104 can also detect an event by separately preparing a learning model that inputs the captured images and outputs an event. In the present exemplary embodiment, the event detection apparatus 104 acquires position information about a subject that has caused the detection of an event, by subjecting the captured images to stereo matching, and includes the position information in event information. The method for acquiring the position information about the subject is not limited thereto. The position information about the subject can be acquired by extracting feature points from the captured images. The event detection apparatus 104 transmits the event information about the detected event to an event information acquisition unit 111.
A virtual camera control apparatus 110 includes the event information acquisition unit 111, an event information storage unit 112, a virtual camera path generation unit 113, a virtual camera path transmission unit 114, and a generation time management unit 115.
An event occurrence time 112-1 represents the time when each event has occurred. In the present exemplary embodiment, the storage format is “month, day, year, hour: minute: second: frame” with a frame rate of 60 frames per second (fps). This means that each frame takes a value from 0 to 59.
An event occurrence position 112-2 represents the position where the event has occurred. In the present exemplary embodiment, the storage format is “x-coordinate, y-coordinate, z-coordinate” expressed in meters (m).
A subject 112-3 represents the subject that has caused the event.
A subject position 112-4 represents the position information about the subject that has caused the event. In the present exemplary embodiment, the center-of-gravity position of the subject is the position information about the subject. The position information about the subject is not limited thereto but can indicate the position of a part of the subject, such as the head or right hand of the subject.
An event type 112-5 represents what type of event has occurred. In the present exemplary embodiment, the event type 112-5 describes event types in a basketball game. For example, the event type “third step while holding ball” represents a traveling foul. In the present exemplary embodiment, the event types are assumed to be defined in advance.
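For illustration, one entry of this event information might be represented as follows; the Python class and field names are hypothetical, and only the contents mirror the fields described above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class EventInformation:
    """One row of the event information table (hypothetical names)."""
    occurrence_time: str      # event occurrence time 112-1: "month, day, year, hour:minute:second:frame" at 60 fps
    occurrence_position: Tuple[float, float, float]   # event occurrence position 112-2, (x, y, z) in meters
    subject: str              # subject 112-3 that has caused the event
    subject_position: Tuple[float, float, float]      # subject position 112-4, e.g., the center of gravity, in meters
    event_type: str           # event type 112-5, e.g., "third step while holding ball"
```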
The event information acquisition unit 111 acquires the event information about the event detected by the event detection apparatus 104, and registers the acquired event information in the event information storage unit 112. As described above, the registered event information associates the event occurrence time 112-1, the event occurrence position 112-2, the subject 112-3, the subject position 112-4, and the event type 112-5 with one another.
The virtual camera path generation unit 113 generates a virtual camera path based on virtual camera operation information acquired from a controller 105 and a virtual viewpoint image reproduction time acquired from the generation time management unit 115. Alternatively, the virtual camera path generation unit 113 can generate a virtual camera path based on the event information acquired from the event information acquisition unit 111. The virtual camera path generation unit 113 transmits the generated virtual camera path to the virtual camera path transmission unit 114. In real-time distribution, a time lag occurs between the time when the cameras capture the images and the time when a virtual viewpoint image is displayed. The time lag corresponds to the processing time from the time when the captured images are acquired to the time when a three-dimensional model is generated and then a virtual viewpoint image is generated. Thus, a user who specifies a virtual viewpoint using the controller 105 must operate the controller while taking the time lag into account, which makes it hard for the user to perform a virtual viewpoint operation suitable for a sudden event. In the present exemplary embodiment, the operation information from the controller 105 is ignored if a virtual camera path generated based on event information exists before a three-dimensional model is generated based on captured images. More specifically, if a sudden event occurs, a virtual camera path is generated before a three-dimensional model is generated, whereby a virtual viewpoint suitable for the event can be set. In the present exemplary embodiment, however, the operation information from the controller 105 may not necessarily be ignored. For example, a virtual camera path generated based on event information can be corrected based on the operation information from the controller 105. Alternatively, a switch for determining which of the pieces of information is to be given priority when generating a virtual camera path can be provided. This switch can be implemented by hardware or implemented on a software user interface (UI). Further alternatively, a virtual camera path can be generated based on event information and the position of the virtual camera before acquisition of the event information.
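The priority rule described above can be sketched as follows. This is a hypothetical illustration in which both the event-based virtual camera path and the controller-based path are simple dictionaries keyed by frame time, and the optional correction by the controller is shown as a variation.

```python
def select_camera_frame(t, event_based_path, controller_path, correct_with_controller=False):
    """Hypothetical sketch: a virtual camera path frame generated from event
    information takes precedence over the operator's controller input."""
    if t in event_based_path:
        frame = dict(event_based_path[t])
        if correct_with_controller:
            # Variation described above: instead of ignoring the controller,
            # correct the event-based frame with the operation information.
            frame.update(controller_path.get(t, {}))
        return frame
    # No event-based frame for this time: follow the controller as usual.
    return controller_path[t]
```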
A time 113-1 represents the time when the virtual camera is generated. In the present exemplary embodiment, the storage format is “month, day, year, hour: minute: second: frame” with a frame rate of 60 fps. This means that each frame takes a value from 0 to 59.
A position 113-2 represents the position of the virtual camera. In the present exemplary embodiment, the storage format is “x-coordinate, y-coordinate, z-coordinate” expressed in meters (m).
An orientation 113-3 represents the orientation of the virtual camera. In the present exemplary embodiment, the storage format is “pan angle, tilt angle” expressed in degrees. The pan angle takes a value from 0 to 360 degrees with respect to a certain direction defined as 0 degrees. The tilt angle takes a value from −180 to 180 degrees with respect to the horizontal direction defined as 0 degrees. An orientation above the horizontal direction is represented by a positive value, and an orientation below the horizontal direction is represented by a negative value.
A zoom magnification 113-4 represents the focal length of the virtual camera expressed in millimeters (mm). More specifically, a smaller value indicates a wider angle of view, and a larger value indicates a narrower, more telephoto angle of view.
As described above, the virtual camera path is defined so as to associate the values of the time 113-1, the position 113-2, the orientation 113-3, and the zoom magnification 113-4 with each other.
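For illustration, one frame of such a virtual camera path might be represented as follows; the Python names are hypothetical, and only the fields mirror the values described above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VirtualCameraFrame:
    """One frame of a virtual camera path (hypothetical names)."""
    time: str     # time 113-1: "month, day, year, hour:minute:second:frame" at 60 fps
    x: float      # position 113-2, in meters
    y: float
    z: float
    pan: float    # orientation 113-3: pan angle, 0 to 360 degrees
    tilt: float   # orientation 113-3: tilt angle, -180 to 180 degrees
    zoom: float   # zoom magnification 113-4: focal length in millimeters

# A virtual camera path associates such frames with each other, one per 1/60 second.
VirtualCameraPath = List[VirtualCameraFrame]
```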
The generation time management unit 115 manages the time when the virtual viewpoint image generation apparatus 106 can generate a virtual viewpoint image. In the present exemplary embodiment, the format of the time when a virtual viewpoint image can be generated is “month, day, year, hour: minute: second: frame” with a frame rate of 60 fps. This means that each frame takes a value from 0 to 59 and the value is incremented by one at intervals of 1/60 seconds. In the present exemplary embodiment, the time when a virtual viewpoint image can be generated is delayed from the current time. The delay time duration is longer than the time duration taken for the three-dimensional model generation apparatus 102 to generate a three-dimensional model. While in the present exemplary embodiment, the user can optionally set how much the time when a virtual viewpoint image can be generated is to be delayed from the current time, the present exemplary embodiment is not limited thereto. For example, the generation time management unit 115 can obtain the maximum time duration taken for the three-dimensional model generation apparatus 102 to generate a three-dimensional model, and automatically determine the time when a virtual viewpoint image can be generated, based on the maximum time duration.
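A minimal sketch of this time management, assuming a user-specified delay expressed in frames, might look as follows; the class and method names are hypothetical.

```python
import datetime

class GenerationTimeManager:
    """Hypothetical sketch of the generation time management unit 115."""
    FRAME = datetime.timedelta(seconds=1 / 60)   # one frame at 60 fps

    def __init__(self, delay_frames):
        # The delay is set longer than the time taken by the three-dimensional
        # model generation apparatus 102 to generate a three-dimensional model.
        self.generatable_time = datetime.datetime.now() - delay_frames * self.FRAME

    def get_generatable_time(self):
        """Return the time when a virtual viewpoint image can be generated."""
        return self.generatable_time

    def advance_one_frame(self):
        """Increment the generatable time by one frame."""
        self.generatable_time += self.FRAME
```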
The virtual camera path transmission unit 114 transmits, to the virtual viewpoint image generation apparatus 106, the virtual camera path transmitted from the virtual camera path generation unit 113. In the present exemplary embodiment, the virtual camera path transmission unit 114 transmits the virtual camera path at intervals of 1/60 seconds, i.e., at 60 fps.
The virtual viewpoint image generation apparatus 106 generates a virtual viewpoint image based on the virtual camera path acquired from the virtual camera path transmission unit 114. The virtual viewpoint image generation apparatus 106 transmits the time 113-1 of the acquired virtual camera path to the three-dimensional model storage apparatus 103 to acquire the three-dimensional model corresponding to the time 113-1. Using the acquired three-dimensional model, the virtual viewpoint image generation apparatus 106 generates, as a virtual viewpoint image, a video image as viewed from a virtual camera defined by the values of the position 113-2, the orientation 113-3, and the zoom magnification 113-4 of the acquired virtual camera path. The virtual viewpoint image generation apparatus 106 transmits the generated virtual viewpoint image to a display 107.
The display 107 outputs the virtual viewpoint image acquired from the virtual viewpoint image generation apparatus 106. In the present exemplary embodiment, the operator is assumed to operate the virtual camera by operating the controller 105 while watching the virtual viewpoint image output to the display 107.
The information processing apparatus 200 includes a central processing unit (CPU) 201, a read only memory (ROM) 202, a random access memory (RAM) 203, an auxiliary storage device 204, a display unit 205, an operation unit 206, a communication I/F 207, and a system bus 208.
The CPU 201 controls the entire information processing apparatus 200 using computer programs and data stored in the ROM 202 or the RAM 203, thereby implementing the corresponding functions of the system described above.
The ROM 202 stores programs not to be changed. The RAM 203 temporarily stores programs and data supplied from the auxiliary storage device 204, and data supplied from the outside via the communication I/F 207. The auxiliary storage device 204, such as a hard disk drive, stores image data, sound data, and other types of data.
The display unit 205 including, for example, a liquid crystal display or a light emitting diode (LED) displays a graphical user interface (GUI) used by the user to issue instructions to the information processing apparatus 200.
The operation unit 206 including, for example, a keyboard, a mouse, a joystick, or a touch panel receives operations from the user and inputs various instructions to the CPU 201. The CPU 201 operates as a display control unit for controlling the display unit 205 and an operation control unit for controlling the operation unit 206.
The communication I/F 207 is used to communicate with apparatuses outside the information processing apparatus 200, such as the camera group 101 and a microphone group. If the information processing apparatus 200 has a function of wirelessly communicating with an external apparatus, the communication I/F 207 includes an antenna.
The system bus 208 connects the components of the information processing apparatus 200 to transfer information therebetween.
In the present exemplary embodiment, the display unit 205 and the operation unit 206 are included inside the information processing apparatus 200. Alternatively, at least one of the display unit 205 and the operation unit 206 can exist as a separate apparatus outside the information processing apparatus 200.
In step S501, the event information acquisition unit 111 acquires event information from the event detection apparatus 104. The event information is a data group including the event occurrence time 112-1, the event occurrence position 112-2, the subject 112-3, the subject position 112-4, and the event type 112-5. The event occurrence time 112-1 included in the event information acquired in this step is assumed to be earlier than the current time and later than the time when a virtual viewpoint image can be generated. In other words, the event occurrence time 112-1 falls after the time when a virtual viewpoint image can be generated and before the current time, i.e., at a time for which the user has not yet specified a virtual viewpoint.
In step S502, the event information acquisition unit 111 registers the event information acquired in step S501 in the event information storage unit 112. In the registration, for example, the event information acquisition unit 111 adds a row of data to the table of event information held in the event information storage unit 112.
In step S503, the event information acquisition unit 111 determines whether to generate a virtual camera path based on the event information acquired in step S501. While the determination method is not specifically limited in the present exemplary embodiment, for example, a rule that a virtual camera path is to be generated if the event type 112-5 of the event information is “third step while holding ball” can be set in advance, and the event information acquisition unit 111 can make the determination according to the rule.
If the event information acquisition unit 111 determines not to generate a virtual camera path based on the acquired event information (NO in step S503), the processing ends. If the event information acquisition unit 111 determines to generate a virtual camera path based on the acquired event information (YES in step S503), the processing proceeds to step S504. In step S504, the event information acquisition unit 111 acquires the entire event information to be used to generate a virtual camera path from the event information storage unit 112, based on the event information acquired in step S501. For example, if the event type 112-5 of the event information acquired in step S501 is “third step while holding ball”, the event information acquisition unit 111 acquires the event information corresponding to “first step while holding ball” and “second step while holding ball” pre-registered for the same subject in the event information storage unit 112.
In step S505, the virtual camera path generation unit 113 generates a virtual camera path based on one or more pieces of event information acquired in step S504. At this time, the virtual camera path generation unit 113 generates the virtual camera path in the format of the table described above, i.e., as values of the time 113-1, the position 113-2, the orientation 113-3, and the zoom magnification 113-4 associated with each other.
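As an illustration of steps S501 to S505, the following Python sketch registers an event and, for the traveling example above, generates virtual camera frames directed at the subject. The camera placement rule (a fixed offset from the subject), the function names, and the dictionary keys are all hypothetical assumptions.

```python
import math

def frame_looking_at(subject_position, distance=5.0, height=3.0, zoom=35.0):
    """Hypothetical rule: place the virtual camera at a fixed offset from the
    subject and point it at the subject (pan/tilt in degrees)."""
    sx, sy, sz = subject_position
    cx, cy, cz = sx - distance, sy, sz + height
    pan = math.degrees(math.atan2(sy - cy, sx - cx)) % 360.0
    tilt = math.degrees(math.atan2(sz - cz, math.hypot(sx - cx, sy - cy)))
    return {"position": (cx, cy, cz), "pan": pan, "tilt": tilt, "zoom": zoom}

def on_event_detected(event, event_store, event_based_path):
    """Hypothetical sketch of steps S501 to S505.

    event:            dict with keys "time", "subject", "subject_position", "type"
    event_store:      list standing in for the event information storage unit 112
    event_based_path: dict mapping a frame time to virtual camera parameters
    """
    event_store.append(event)                               # step S502
    if event["type"] != "third step while holding ball":    # step S503 (rule-based)
        return
    related = [e for e in event_store                       # step S504
               if e["subject"] == event["subject"]
               and "step while holding ball" in e["type"]]
    for e in related:                                       # step S505
        event_based_path[e["time"]] = frame_looking_at(e["subject_position"])
```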
In the present exemplary embodiment, the virtual camera control apparatus 110 repeats steps S601 to S609 at intervals of 1/60 seconds. This repetition interval is due to the generation of a virtual camera path at a frame rate of 60 fps. This means that, when generating a virtual camera path at a frame rate of 30 fps, the virtual camera control apparatus 110 repeats steps S601 to S609 at intervals of 1/30 seconds. The repetition interval can be optionally set by the user.
In step S602, the virtual camera path generation unit 113 acquires the time when a virtual viewpoint image can be generated that is managed by the generation time management unit 115.
In step S603, the virtual camera path generation unit 113 determines whether a virtual camera path based on event information has already been generated for the time acquired in step S602. More specifically, the virtual camera path generation unit 113 determines whether data of the virtual camera path corresponding to the time acquired in step S602 has been generated in step S505. If such data has been generated (YES in step S603), the processing proceeds to step S607. If not (NO in step S603), the processing proceeds to step S604.
In step S604, the virtual camera path generation unit 113 acquires the operation information from the controller 105. This means that the result of the determination in step S603 indicates that a virtual camera path based on event information does not exist at the time acquired in step S602 and thus a virtual camera path is to be generated based on the operation information from the controller 105.
In step S605, the virtual camera path generation unit 113 generates a virtual camera path based on the operation information acquired in step S604 and the time information acquired in step S602.
In step S606, the virtual camera path transmission unit 114 transmits the virtual camera path generated in step S605 to the virtual viewpoint image generation apparatus 106.
In step S607, the virtual camera path transmission unit 114 transmits the data corresponding to the time acquired in step S602 in the virtual camera path generated in step S505 to the virtual viewpoint image generation apparatus 106.
In step S608, the generation time management unit 115 increments the time when a virtual viewpoint image can be generated by one frame.
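One iteration of this loop (steps S601 to S608) can be sketched as follows, reusing the GenerationTimeManager sketch given earlier; the callable interfaces for the controller and the transmission are hypothetical.

```python
def camera_path_iteration(time_manager, event_based_path, get_operation, transmit):
    """Hypothetical sketch of one 1/60-second iteration.

    time_manager:     object providing the time when a virtual viewpoint image
                      can be generated (see the GenerationTimeManager sketch)
    event_based_path: dict of frames generated from event information in step S505
    get_operation:    callable returning the operation information from the controller 105
    transmit:         callable sending one frame to the virtual viewpoint image
                      generation apparatus 106
    """
    t = time_manager.get_generatable_time()                 # step S602
    if t in event_based_path:                                # step S603
        transmit(event_based_path[t])                        # step S607
    else:
        operation = get_operation()                          # step S604
        frame = {"time": t, **operation}                     # step S605
        transmit(frame)                                      # step S606
    time_manager.advance_one_frame()                         # step S608
```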
As described above, in the present exemplary embodiment, the virtual camera control apparatus 110 transmits a virtual camera path based on the operation information from the controller 105. The virtual camera control apparatus 110 also transmits a virtual camera path generated based on event information to the virtual viewpoint image generation apparatus 106 upon acquisition of the event information. As described above in step S501, it is assumed that, upon acquisition of event information, the event occurrence time is earlier than the current time and later than the time when a virtual viewpoint image can be generated. This enables generating a virtual camera path before generation of a three-dimensional model, thereby preventing a careless virtual viewpoint operation at the time of occurrence of an event.
The present exemplary embodiment makes it possible to prevent a careless virtual viewpoint operation at the time of occurrence of an event.
In the first exemplary embodiment, the method has been described in which the system that acquires event information about an event that occurs later than the time when a virtual viewpoint image can be generated generates a virtual camera path based on the acquired event information. However, there may be a case where the event detection apparatus 104 is unable to constantly detect an event at the same time as the current time. In such a case, at the point in time of acquisition of the event information by the event information acquisition unit 111, the event occurrence time may be earlier than the time when a virtual viewpoint image can be generated. To address the issue, in a second exemplary embodiment, a configuration will be described in which the time when a virtual viewpoint image can be generated and the event occurrence time are compared with each other and, only if the event occurrence time is later than the time when a virtual viewpoint image can be generated, a virtual camera path based on event information is generated.
The generation flag management unit 701 manages a virtual camera path generation flag. When the virtual camera path generation unit 113 generates a virtual camera path based on event information, the flag is changed to TRUE. The virtual camera path generation flag is used to determine whether a virtual camera path has been generated based on event information. If the flag is TRUE, the virtual camera path generation unit 113 compares the time when a virtual viewpoint image can be generated and the time of the generated virtual camera path with each other. If the time of the generated virtual camera path is later than the time when a virtual viewpoint image can be generated, the virtual camera path generation unit 113 stores the generated virtual camera path. If the time of the generated virtual camera path is earlier than the time when a virtual viewpoint image can be generated, the virtual camera path generation unit 113 deletes the virtual camera path. After deleting the generated virtual camera path, the virtual camera path generation unit 113 generates a virtual camera path based on the operation information from the controller 105 as usual, and transmits the generated virtual camera path to the virtual viewpoint image generation apparatus 106.
In step S801, the virtual camera control apparatus 110 stores the start time of the virtual camera path generated in step S505, i.e., the earliest time of the generated virtual camera path.
In step S802, the virtual camera control apparatus 110 changes the virtual camera path generation flag managed by the generation flag management unit 701 to TRUE.
In the present exemplary embodiment, the virtual camera control apparatus 110 repeats steps S601 to S609 at intervals of 1/60 seconds. This repetition interval is due to the generation of a virtual camera path at a frame rate of 60 fps. Steps other than steps S901 to S904 are similar to the corresponding steps described in the first exemplary embodiment.
In step S901, the virtual camera path generation unit 113 determines whether the virtual camera path generation flag managed by the generation flag management unit 701 is TRUE. If the flag is FALSE (NO in step S901), the processing proceeds to step S903. If the flag is TRUE, the virtual camera control apparatus 110 further compares the time when a virtual viewpoint image can be generated and the start time of the virtual camera path determined to have been generated in step S603. If the virtual camera path start time is earlier than the time when a virtual viewpoint image can be generated as a result of the comparison (YES in step S901), the processing proceeds to step S902. If the virtual camera path start time is later than the time when a virtual viewpoint image can be generated (NO in step S901), the processing proceeds to step S903.
In step S902, the virtual camera path generation unit 113 deletes the virtual camera path determined to have been generated in step S603 from the virtual camera path generation unit 113. In other words, if a virtual camera path is generated based on event information but the start time is earlier than the time when a virtual viewpoint image can be generated, the generated virtual camera path is to be discarded. After step S902, the processing proceeds to step S604. In step S604, the virtual camera control apparatus 110 performs the processing for generating a virtual camera path based on the operation information from the controller 105 as usual.
In step S903, the virtual camera path transmission unit 114 transmits the virtual camera path determined to have been generated in step S603 to the virtual viewpoint image generation apparatus 106.
In step S904, the generation flag management unit 701 changes the virtual camera path generation flag managed by the generation flag management unit 701 to FALSE.
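The additional check of the second exemplary embodiment (steps S901 to S904) can be sketched as follows; the flag handling and the return convention are hypothetical simplifications of the processing described above.

```python
def use_or_discard_event_based_path(generation_flag, path_start_time,
                                    generatable_time, event_based_path):
    """Hypothetical sketch: decide whether an event-based virtual camera path
    can still be applied (cf. steps S901 and S902).

    Returns True if the path is applied (step S903, after which the generation
    flag is reset in step S904), or False if the path was discarded because its
    start time is already earlier than the time when a virtual viewpoint image
    can be generated (the processing then falls back to the controller)."""
    if generation_flag and path_start_time < generatable_time:
        event_based_path.clear()   # step S902: discard the too-late path
        return False
    return True
```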
As described above, according to the present exemplary embodiment, in a case where a virtual camera path has been generated based on event information, the virtual camera path generation unit 113 determines whether the start time of the virtual camera path is later than the time when a virtual viewpoint image can be generated, and thereby determines whether to apply the generated virtual camera path.
A computer program for implementing part or whole of control according to the above-described exemplary embodiments, i.e., the functions according to the above-described exemplary embodiments can be supplied to an image processing system via a network or various types of storage media, and a computer (or a CPU or a micro processing unit (MPU)) of the image processing system can read and execute the program. In this case, the program and a storage medium storing the program are included in the exemplary embodiments of the present disclosure.
The exemplary embodiments of the present disclosure include the following configurations, method, and storage medium:
Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2022-096458, filed Jun. 15, 2022, which is hereby incorporated by reference herein in its entirety.