This application is based upon and claims the benefit of priority from Japanese patent application No. 2018-125200, filed on Jun. 29, 2018, the disclosure of which is incorporated herein in its entirety by reference.
The present disclosure relates to a viewpoint recommendation apparatus for estimating a viewpoint of an object, a recommendation method thereof, and a program.
An object pose estimation apparatus that observes an object at a plurality of viewpoints is known (see, for example, Japanese Unexamined Patent Application Publication No. 2012-022411).
Since the object pose estimation apparatus observes the object at a plurality of randomly set viewpoints, these viewpoints are not necessarily the optimum positions for observing the object.
The present disclosure has been made to solve such a problem. A main object of the present disclosure is to provide a viewpoint recommendation apparatus that can estimate an optimum viewpoint of an object, a recommendation method thereof, and a program.
An example aspect of the present disclosure to achieve the above objective is a viewpoint recommendation apparatus for estimating, from a first viewpoint of an object, a second viewpoint at which the object is to be observed next in order to estimate a pose of the object. The viewpoint recommendation apparatus includes: image acquisition means for acquiring an image of the object at the first viewpoint; image feature extraction means for extracting an image feature of the image of the object from the image of the object acquired by the image acquisition means; pose estimation means for calculating a first likelihood map indicating a relation between the estimated pose of the object estimated from the image of the object and a likelihood of this estimated pose based on the image feature of the object extracted by the image feature extraction means; second storage means for storing a second likelihood map indicating a relation between the true first viewpoint in the estimated pose and a likelihood of this first viewpoint; third storage means for storing a third likelihood map indicating a relation between the pose of the object when the object is observed at the first viewpoint and the second viewpoint and a likelihood of this pose; and viewpoint estimation means for estimating the second viewpoint so that a value of an evaluation function becomes the maximum or minimum, the evaluation function using, as a parameter, a result of multiplying and integrating the first likelihood map calculated by the pose estimation means, the second likelihood map stored in the second storage means, and the third likelihood map stored in the third storage means.
In this example aspect, the viewpoint estimation means may estimate the second viewpoint δ2 (hat) using the following formula based on the first likelihood map p(ξ|I1) calculated by the pose estimation means, the second likelihood map p(φ1|ξ) stored in the second storage means, and the third likelihood map p(θ|δ2, φ1) stored in the third storage means. In this formula, ξ is the estimated pose of the object, I1 is the image of the object acquired by the image acquisition means at the first viewpoint, φ1 is the first viewpoint, θ is the pose of the object, and δ2 is the amount of rotation (movement) from the first viewpoint to the second viewpoint.
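Although the formula itself is not reproduced here, it presumably takes the following form, which corresponds to formula (5) derived in the embodiment described later (argmax may be used instead of argmin, depending on the evaluation function g(⋅)):

δ̂2 = argmin_δ2 g(∫∫ p(θ|δ2, φ1) p(φ1|ξ) p(ξ|I1) dξ dφ1)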
In this example aspect, first learning means for learning the image feature of each pose of the object is further included. The pose estimation means may compare the image feature of the object extracted by the image feature extraction means with the image feature of each pose of the object learned by the first learning means to calculate the first likelihood map.
In this example aspect, the second storage means may be second learning means which learns a relation between the true first viewpoint in the estimated pose and the likelihood of the true first viewpoint and stores it as the second likelihood map.
In this example aspect, the third storage means may be third learning means which learns a relation between the pose of the object when the object is observed at the first viewpoint and the second viewpoint and a likelihood of this pose and stores it as the third likelihood map.
In this example aspect, the evaluation function may be a function for calculating a variance of a distribution or an entropy of the distribution.
In this example aspect, the viewpoint estimation means may estimate at least one second viewpoint so that the value of the evaluation function becomes greater than or equal to a threshold or less than or equal to the threshold.
Another example aspect of the present disclosure to achieve the above object may be a recommendation method performed by a viewpoint recommendation apparatus for estimating, from a first viewpoint of an object, a second viewpoint at which the object is to be observed next in order to estimate a pose of the object. The recommendation method includes: acquiring an image of the object at the first viewpoint; extracting an image feature of the image of the object from the acquired image of the object; calculating a first likelihood map indicating a relation between the estimated pose of the object estimated from the image of the object and a likelihood of this estimated pose based on the extracted image feature of the object; and estimating the second viewpoint so that a value of an evaluation function becomes the maximum or minimum, the evaluation function using, as a parameter, a result of multiplying and integrating the calculated first likelihood map, a second likelihood map indicating a relation between the true first viewpoint in the estimated pose and a likelihood of this first viewpoint, and a third likelihood map indicating a relation between the pose of the object when the object is observed at the first viewpoint and the second viewpoint and a likelihood of this pose.
Another example aspect of the present disclosure to achieve the above object may be a program of a viewpoint recommendation apparatus for estimating, from a first viewpoint of an object, a second viewpoint at which the object is to be observed next in order to estimate a pose of the object. The program causes a computer to execute: a process of acquiring an image of the object at the first viewpoint; a process of extracting an image feature of the image of the object from the acquired image of the object; a process of calculating a first likelihood map indicating a relation between the estimated pose of the object estimated from the image of the object and a likelihood of this estimated pose based on the extracted image feature of the object; and a process of estimating the second viewpoint so that a value of an evaluation function becomes the maximum or minimum, the evaluation function using, as a parameter, a result of multiplying and integrating the calculated first likelihood map, a second likelihood map indicating a relation between the true first viewpoint in the estimated pose and a likelihood of this first viewpoint, and a third likelihood map indicating a relation between the pose of the object when the object is observed at the first viewpoint and the second viewpoint and a likelihood of this pose.
The present disclosure can provide a viewpoint recommendation apparatus that can estimate an optimum viewpoint of an object, a recommendation method thereof, and a program.
The above and other objects, features and advantages of the present disclosure will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus are not to be considered as limiting the present disclosure.
Hereinafter, an embodiment of the present disclosure is explained with reference to the drawings.
It is important to estimate a pose of an unknown object, for example, in order for a life support robot to hold and move the object. Such a robot can estimate the pose of the object with high accuracy by observing the object from a plurality of viewpoints with a sensor such as a camera.
However, when the object is observed at a plurality of randomly set viewpoints, these viewpoints are not necessarily the optimal positions for observing the object. For example, the object may be difficult to recognize depending on the viewpoint, which can cause problems such as degraded recognition accuracy.
On the other hand, the viewpoint recommendation apparatus according to the embodiment of the present disclosure estimates, from a first viewpoint at which the object is observed first, an optimal second viewpoint at which the object is to be observed next. By doing so, it is possible to estimate the optimum viewpoint of the object, and when the object is observed at the optimum viewpoint, the pose of the object can be estimated with high accuracy.
Note that a main hardware configuration of the viewpoint recommendation apparatus 1 includes a microcomputer composed of, for example, a CPU (Central Processing Unit) that performs calculation processing and the like, a memory composed of a ROM (Read Only Memory) and a RAM (Random Access Memory) that stores a calculation program and the like executed by the CPU, and an interface unit (I/F) that inputs and outputs signals to and from the outside. The CPU, the memory, and the interface unit are connected to one another through a data bus or the like.
The image acquisition unit 2 is a specific example of image acquisition means. The image acquisition unit 2 acquires an image of the object in chronological order at the first viewpoint. The image acquisition unit 2 is composed of, for example, an RGB camera, an infrared camera, a distance sensor, or the like, but is not limited to this. The image acquisition unit 2 may be composed of any sensor as long as it can acquire an image of the object. The image acquisition unit 2 outputs the image of the object acquired at the first viewpoint to the image feature extraction unit 7.
The first deep layer learning unit 3 is a specific example of first learning means. The first and second statistical processing units 4 and 5 are specific examples of second and third learning means, respectively. The first deep layer learning unit 3 is composed of a neural network such as a CNN (Convolutional Neural Network) or an RNN (Recurrent Neural Network). The RNN includes, for example, an LSTM (Long Short-Term Memory) as an intermediate layer.
Each of the first deep layer learning unit 3 and the first and second statistical processing units 4 and 5 may be composed of a learning device such as an SVM (Support Vector Machine). The first deep layer learning unit 3, the first and second statistical processing units 4 and 5, and the storage unit 6 may be integrally configured. The first deep layer learning unit 3 learns the image feature of each pose of the object using a plurality of sets of images of the object and learning data of the pose of the object, and stores a result of the learning (a pose template).
The image feature extraction unit 7 is a specific example of image feature extraction means. The image feature extraction unit 7 uses a filter model stored in the storage unit 6 to extract the image feature of the image of the object from the image of the object acquired by the image acquisition unit 2. The filter model includes, for example, a base vector for dimensional compression by PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), etc., a differentiation filter for extracting an edge, and a local filter such as a discrete cosine transform, etc.
The image feature extraction unit 7 may automatically extract the image feature of the object from the image of the object acquired by the image acquisition unit 2 using a deep learning unit. In this case, the deep learning unit learns the image feature favorable for recognition using the learning data in advance. The image feature extraction unit 7 outputs the extracted image feature of the image of the object to the pose estimation unit 8.
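As a rough illustration of the filter-model-based extraction described above, the following sketch compresses a flattened image with a PCA basis and adds a simple differentiation (edge) statistic; the array shapes, function name, and feature layout are assumptions for illustration only, not the actual implementation of the image feature extraction unit 7.

```python
import numpy as np

def extract_image_feature(image, pca_basis, pca_mean):
    """Sketch of filter-model feature extraction (shapes and names are illustrative).

    image:     (H, W) grayscale image of the object
    pca_basis: (D, H*W) base vectors for dimensional compression, learned in advance
    pca_mean:  (H*W,) mean vector used when the basis was learned
    """
    # Dimensional compression of the raw pixels by the PCA base vectors.
    compressed = pca_basis @ (image.reshape(-1) - pca_mean)
    # Simple differentiation filter along the horizontal axis to extract edges.
    edge_strength = np.abs(np.gradient(image, axis=1)).mean()
    # Concatenate the compressed pixels and the coarse edge statistic into one feature.
    return np.concatenate([compressed, [edge_strength]])

# Toy usage with random data standing in for a real image and a learned basis.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
basis = rng.random((16, 32 * 32))
feature = extract_image_feature(img, basis, np.full(32 * 32, 0.5))
```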
The pose estimation unit 8 is a specific example of pose estimation means. The pose estimation unit 8 compares the image feature of the object extracted by the image feature extraction unit 7 with the image feature (the pose template) of each pose learned in advance by the first deep layer learning unit 3 to calculate a likelihood of the pose of the object. Then, the pose estimation unit 8 calculates an estimated pose likelihood map with three axes (x axis, y axis, and z axis) of the pose of the object. Note that the pose estimation unit 8 may instead compare the image feature of the object extracted by the image feature extraction unit 7 with the image feature of each pose of the object stored in advance in the storage unit 6 to calculate the estimated pose likelihood map.
The estimated pose likelihood map is one specific example of a first likelihood map. The estimated pose likelihood map is, for example, map information indicating a relation between the estimated pose (an estimated angle) ξ of the object and a likelihood p(ξ|I1) of this estimated pose. I1 is an image of the object acquired by the image acquisition unit 2 at the first viewpoint.
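A minimal sketch of how such an estimated pose likelihood map p(ξ|I1) might be computed from the extracted feature and the per-pose templates is shown below, assuming a simple distance-based softmax; the actual embodiment uses the templates learned by the first deep layer learning unit 3, so the names and the scoring rule here are illustrative assumptions.

```python
import numpy as np

def estimated_pose_likelihood_map(feature, pose_templates, temperature=1.0):
    """Compare an image feature with per-pose templates and return p(xi | I1).

    feature:        (D,) image feature extracted at the first viewpoint
    pose_templates: dict mapping a discretized pose xi (e.g. Euler angles) to a (D,) template
    Returns a dict mapping each candidate pose to its likelihood (normalized to sum to 1).
    """
    poses = list(pose_templates.keys())
    # Smaller feature distance -> higher likelihood (softmax over negative distances).
    dists = np.array([np.linalg.norm(feature - pose_templates[xi]) for xi in poses])
    scores = np.exp(-dists / temperature)
    probs = scores / scores.sum()
    return dict(zip(poses, probs))

# Toy usage: three candidate poses rotated about the z axis.
rng = np.random.default_rng(1)
templates = {(0, 0, angle): rng.random(8) for angle in (0, 90, 180)}
likelihood_map = estimated_pose_likelihood_map(rng.random(8), templates)
```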
The viewpoint estimation unit 9 is a specific example of viewpoint estimation means. The viewpoint estimation unit 9 estimates the next second viewpoint of the image acquisition unit 2 based on the estimated pose likelihood map calculated by the pose estimation unit 8.
Here, a cup with a handle is explained as an example of the object.
Note that in this embodiment, the second viewpoint is expressed as an amount of a movement from the first viewpoint, but the present disclosure is not limited to this. For example, like the first viewpoint φ1, the second viewpoint may be expressed as the rotation angle φ2.
The viewpoint estimation unit 9 estimates the next second viewpoint δ2 (hat) by evaluating the kurtosis (sharpness) of the pose likelihood map of the object using an evaluation function g(⋅).
The evaluation function g(⋅) is, for example, a function of formula (1) for calculating the variance of the pose likelihood map or a function of formula (2) for calculating the entropy of the pose likelihood map. The kurtosis of the pose likelihood map can be appropriately evaluated by using these evaluation functions.
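Formulas (1) and (2) themselves are not reproduced here; based on the above description, they presumably take forms such as the following, where the variance and the entropy are taken over the pose θ of the distribution p(θ|δ2, I1) and E[θ] = ∫ θ p(θ|δ2, I1) dθ:

g(p(θ|δ2, I1)) = ∫ (θ − E[θ])² p(θ|δ2, I1) dθ   (a plausible form of formula (1): variance)

g(p(θ|δ2, I1)) = −∫ p(θ|δ2, I1) log p(θ|δ2, I1) dθ   (a plausible form of formula (2): entropy)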
Note that the above evaluation function is merely an example, and any function can be employed as long as it can evaluate the kurtosis of the pose likelihood map.
For example, when the value of the evaluation function g(⋅) of the formula (1) or (2) is the minimum, the kurtosis of the pose likelihood map becomes the maximum. Thus, the viewpoint estimation unit 9 estimates the next second viewpoint δ2 (hat) using the following formula (3). In the following formula (3), δ2 (hat) is indicated by adding a hat symbol over δ2.
[Formula 2]
δ̂2 = argmin_δ2 g(p(θ|δ2, I1))   (3)
The viewpoint estimation unit 9 uses the above formula (3) to estimate the second viewpoint δ2 (hat) such that the value of the evaluation function g(p(θ|δ2, I1)) becomes the minimum and the kurtosis of the pose likelihood map becomes the maximum.
The viewpoint estimation unit 9 may estimate at least one second viewpoint δ2 (hat) such that the value of the evaluation function g(p(θ|δ2, I1)) becomes less than or equal to a threshold using the above formula (3). Then, at least one second viewpoint δ2 at which the likelihood of the pose of the object becomes high can be estimated.
When the positive and negative signs of the above evaluation function are reversed, the viewpoint estimation unit 9 may estimate the second viewpoint δ2 (hat) at which the evaluation function g(p(θ|δ2, I1)) becomes the maximum. Further, the viewpoint estimation unit 9 may estimate at least one second viewpoint δ2 (hat) such that the value of the evaluation function g(p(θ|δ2, I1)) becomes larger than or equal to the threshold according to the type of the evaluation function.
Here, latent variables φ1 and ξ are introduced into p(θ|δ2, I1) in the above formula (1), and p(θ|δ2, I1) is expanded by multiplying and integrating p(θ|δ2, φ1), p(φ1|ξ), and p(ξ|I1) as in the following formula (4).
[Formula 3]
p(θ|δ2, I1) = ∫ p(θ|δ2, φ1) p(φ1|I1) dφ1 = ∫∫ p(θ|δ2, φ1) p(φ1|ξ) p(ξ|I1) dξ dφ1   (4)
As described above, p(ξ|I1) in the formula (4) is the estimated pose likelihood map output from the pose estimation unit 8. In the formula (4), p(φ1|ξ) is an estimated position accuracy map indicating a relation between the true first viewpoint φ1 and the likelihood p(φ1|ξ) of this first viewpoint when the object is in the estimated pose ξ. The estimated position accuracy map is one specific example of the second likelihood map.
The estimated position accuracy map indicates the estimation accuracy of the pose estimation unit 8 when the image features of the object in the same pose are input to the pose estimation unit 8, and indicates an error distribution for each pose of the object. The estimated position accuracy map is prepared in advance as pairs of true values of the first viewpoints and likelihoods thereof for the object in various poses and for various objects within the same category.
For example, when a cup is viewed from the direction of (1), the position of the handle can be identified, and the pose of the cup can be accurately estimated. Thus, the likelihood of the first viewpoint is high. On the other hand, when the cup is viewed from the direction of (2), the position of the handle cannot be identified, and it is difficult to identify the pose of the cup. Thus, the likelihood of the first viewpoint is low.
The first statistical processing unit 4 learns the relation between the true first viewpoint φ1 and the likelihood p(φ1|ξ) of this first viewpoint when the object is in the estimated pose ξ, and stores the relation as the estimated position accuracy map. For example, in an object coordinate system, pose estimation is performed on the known image I1 at the first viewpoint φ1i and the estimated pose ξ is obtained. The first statistical processing unit 4 learns a correspondence relation between the first viewpoint φ1i and the estimated pose ξ using the data of the plurality of sets of the first viewpoint φ1i and the estimated pose ξ obtained in the manner described above, estimates probability density, and generates the estimated position accuracy map.
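A minimal sketch of this learning is shown below, assuming a one-dimensional viewpoint angle and a simple histogram-based density estimate in place of whatever probability density estimation the first statistical processing unit 4 actually performs; all names and shapes are assumptions for illustration.

```python
import numpy as np

def learn_estimated_position_accuracy_map(first_viewpoints, estimated_poses, bins):
    """Estimate p(phi1 | xi) from pairs of true first viewpoints and estimated poses.

    first_viewpoints: (N,) true first viewpoints phi1_i (a 1-D angle for simplicity)
    estimated_poses:  (N,) poses xi estimated from the image taken at each phi1_i
    bins:             1-D array of bin edges shared by phi1 and xi
    Returns a 2-D array accuracy[j, k] ~ p(phi1 in bin k | xi in bin j).
    """
    joint, _, _ = np.histogram2d(estimated_poses, first_viewpoints, bins=[bins, bins])
    joint += 1e-9                                      # avoid empty xi rows
    return joint / joint.sum(axis=1, keepdims=True)    # normalize each xi row over phi1

# Toy usage: pose estimation is accurate except near 90 degrees, where it is noisy.
rng = np.random.default_rng(2)
phi1 = rng.uniform(0.0, 180.0, size=5000)
xi = phi1 + rng.normal(0.0, 5.0 + 20.0 * (np.abs(phi1 - 90.0) < 30.0), size=phi1.size)
accuracy_map = learn_estimated_position_accuracy_map(phi1, xi, np.linspace(0, 180, 19))
```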
In the formula (4), p(θ|δ2, φ1) is a multi-viewpoint likelihood map indicating a relation between the pose θ of the object when the object is observed at the first viewpoint φ1 and the second viewpoint (the amount of the movement) δ2 and the likelihood p(θ|δ2, φ1) of this pose. The multi-viewpoint likelihood map is one specific example of a third likelihood map. The multi-viewpoint likelihood map may be expressed by a set of three quaternions of (φ1, δ2, θ).
The second statistical processing unit 5 learns the relation between the pose θ of the object when the object is observed at the first viewpoint φ1 and the second viewpoint δ2 and the likelihood p(θ|δ2, φ1) of this pose, and stores it as the multi-viewpoint likelihood map.
For example, the second statistical processing unit 5 learns the multi-viewpoint likelihood map p(θ|δ2, φ1) after the image acquisition unit 2 is moved to the second viewpoint based on the true pose θ of the object observed at the first viewpoint φ1i and the second viewpoint (the amount of the movement) δ2. The second statistical processing unit 5 calculates the multi-viewpoint likelihood map by shifting and superimposing p(ξ|φ1) and p(ξ|φ1+δ2) in consideration of the amount of the movement δ2.
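The shift-and-superimpose operation might look like the following sketch for a one-dimensional, discretized viewpoint and pose space; the combination rule (an element-wise product of the two shifted single-view maps followed by normalization) and all names are assumptions for illustration.

```python
import numpy as np

def multi_viewpoint_likelihood_map(p_xi_given_phi1, delta2_index):
    """Sketch of building p(theta | delta2, phi1) from single-view likelihoods.

    p_xi_given_phi1: (P, T) array, row i is p(xi | phi1 = i-th viewpoint) over T pose bins
    delta2_index:    movement to the second viewpoint, expressed in viewpoint bins
    Returns a (P, T) array, row i is p(theta | delta2, phi1 = i-th viewpoint).
    """
    num_viewpoints = p_xi_given_phi1.shape[0]
    # Likelihood observed from the second viewpoint phi1 + delta2 (viewpoints wrap around).
    shifted = p_xi_given_phi1[(np.arange(num_viewpoints) + delta2_index) % num_viewpoints]
    # Superimpose the two observations (assumed independent) and renormalize per row.
    combined = p_xi_given_phi1 * shifted
    return combined / combined.sum(axis=1, keepdims=True)

# Toy usage: 36 viewpoint bins x 36 pose bins, with a movement of 90 degrees (9 bins).
rng = np.random.default_rng(3)
single_view = rng.random((36, 36))
single_view /= single_view.sum(axis=1, keepdims=True)
multi_view = multi_viewpoint_likelihood_map(single_view, delta2_index=9)
```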
In this embodiment, when the object is viewed from the first viewpoint φ1, it is estimated how much to move the image acquisition unit 2 from the first viewpoint φ1, i.e., the second viewpoint (the amount of the movement) δ2 is estimated. First, the pose ξ of the object at the first viewpoint φ1 is estimated from the image I1, and this estimated pose ξ is expressed as the estimated pose likelihood map p(ξ|I1). Since it is unknown whether this estimated pose ξ is correct, the estimated position accuracy map p(φ1|ξ) shows how accurate this estimated pose is. Furthermore, the multi-viewpoint likelihood map p(θ|δ2, φ1) represents how accurate the pose of the object is when the first viewpoint φ1 and a certain second viewpoint δ2 are given. By multiplying and integrating these likelihood maps, it is possible to cover every pattern and express the likelihood p(θ|δ2, I1) of the pose of the object.
In particular, the first viewpoint φ1 is introduced as a latent variable by the transformation in the above formula (4). The first viewpoint φ1 is expressed as a likelihood because it is unknown. The second viewpoint is set according to the first viewpoint φ1 and the value of the evaluation function is evaluated, so that the optimum second viewpoint can be calculated. By observing the object at the calculated optimum second viewpoint, it is possible to estimate the pose of the object with high accuracy.
In this embodiment, as described above, the viewpoint estimation unit 9 estimates the second viewpoint δ2 (hat) so that the value of the evaluation function g(⋅) becomes the minimum. The evaluation function g(⋅) here uses, as a parameter, a result of multiplying and integrating the estimated pose likelihood map p(ξ|I1) calculated by the pose estimation unit 8, the estimated position accuracy map p(φ1|ξ) learned by the first statistical processing unit 4, and the multi-viewpoint likelihood map p(θ|δ2, φ1) learned by the second statistical processing unit 5.
Then, as described above, a result of multiplying and integrating the estimated pose likelihood map p(ξ|I1) calculated by the pose estimation unit 8, the estimated position accuracy map p(φ1|ξ) learned by the first statistical processing unit 4, and the multi-viewpoint likelihood map p(θ|δ2, φ1) learned by the second statistical processing unit 5 indicates the likelihood of the pose θ of the object. Therefore, by estimating the second viewpoint δ2 so that the value of the evaluation function g(⋅) becomes the minimum, i.e., so that the kurtosis of the likelihood distribution of the pose θ of the object becomes the maximum, it is possible to estimate the pose of the object with high accuracy.
For example, the viewpoint estimation unit 9 estimates the second viewpoint δ2 (hat) using the following formula (5) based on the estimated pose likelihood map p(ξ|I1) from the pose estimation unit 8, the estimated position accuracy map p(φ1|ξ) learned by the first statistical processing unit 4, and the multi-viewpoint likelihood map p(θ|δ2, φ1) learned by the second statistical processing unit 5.
[Formula 4]
δ̂2 = argmin_δ2 g(∫∫ p(θ|δ2, φ1) p(φ1|ξ) p(ξ|I1) dξ dφ1)   (5)
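A compact, discretized sketch of formula (5) is shown below: the three likelihood maps are multiplied and summed over ξ and φ1 (the discrete counterpart of the integration) with np.einsum, and the candidate movement δ2 whose resulting pose distribution has the smallest variance is selected. The array layouts, the variance-based g(⋅), and all names are assumptions for illustration.

```python
import numpy as np

def recommend_second_viewpoint(p_xi_given_I1, p_phi1_given_xi, p_theta_given_delta2_phi1,
                               theta_values):
    """Discretized formula (5): pick delta2 minimizing the variance of p(theta | delta2, I1).

    p_xi_given_I1:             (X,)      estimated pose likelihood map
    p_phi1_given_xi:           (X, P)    estimated position accuracy map
    p_theta_given_delta2_phi1: (D, P, T) multi-viewpoint likelihood map per candidate delta2
    theta_values:              (T,)      pose value of each theta bin (used for the variance)
    """
    # p(theta | delta2, I1) = sum over xi and phi1 of the product of the three maps.
    p_theta = np.einsum('x,xp,dpt->dt', p_xi_given_I1, p_phi1_given_xi,
                        p_theta_given_delta2_phi1)
    p_theta /= p_theta.sum(axis=1, keepdims=True)

    mean = p_theta @ theta_values                                    # E[theta] per delta2
    variance = (p_theta * (theta_values[None, :] - mean[:, None]) ** 2).sum(axis=1)
    return int(np.argmin(variance))                                  # index of the best delta2

# Toy usage with random maps: 36 pose bins, 36 viewpoint bins, 12 candidate movements.
rng = np.random.default_rng(4)
p_xi = rng.dirichlet(np.ones(36))
p_phi1 = rng.dirichlet(np.ones(36), size=36)
p_theta_map = rng.dirichlet(np.ones(36), size=(12, 36))
best_delta2 = recommend_second_viewpoint(p_xi, p_phi1, p_theta_map, np.linspace(0, 350, 36))
```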
The estimated position accuracy map p(φ1|ξ) and the multi-viewpoint likelihood map p(θ|δ2, φ1) may be stored in advance in the storage unit 6. In this case, the viewpoint estimation unit 9 estimates the second viewpoint δ2 (hat) based on the estimated pose likelihood map p(ξ|I1) from the pose estimation unit 8, and the estimated position accuracy map p(φ1|ξ) and the multi-viewpoint likelihood map p(θ|δ2, φ1) stored in the storage unit 6.
The viewpoint estimation unit 9 may calculate a function f (δ2 = f(ξ)) indicating the relation between the estimated pose ξ of the object and the estimated second viewpoint δ2. The function (map) f calculates the second viewpoint δ2 at which g(p(θ|δ2, φ1)) is maximized or minimized for the estimated pose ξ. The viewpoint estimation unit 9 calculates δ2 = f(ξ) based on the estimated pose ξ of the object to obtain the second viewpoint δ2. Then, the second viewpoint δ2 for the estimated pose ξ of the object can be easily estimated.
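Such a map f could be precomputed offline as a simple lookup table, as in the following sketch; the discretization, the variance-based criterion, and every name here are again assumptions for illustration.

```python
import numpy as np

def precompute_viewpoint_table(p_phi1_given_xi, p_theta_given_delta2_phi1, theta_values):
    """Build a lookup table f: discretized estimated pose xi -> recommended delta2 index.

    For each xi bin, the pose distribution after each candidate movement delta2 is formed
    by weighting the multi-viewpoint map with p(phi1 | xi), and the delta2 giving the
    smallest variance is stored.
    """
    table = []
    for p_phi1 in p_phi1_given_xi:                        # loop over xi bins
        # p(theta | delta2, xi) ~ sum over phi1 of p(theta | delta2, phi1) p(phi1 | xi)
        p_theta = np.einsum('p,dpt->dt', p_phi1, p_theta_given_delta2_phi1)
        p_theta /= p_theta.sum(axis=1, keepdims=True)
        mean = p_theta @ theta_values
        variance = (p_theta * (theta_values[None, :] - mean[:, None]) ** 2).sum(axis=1)
        table.append(int(np.argmin(variance)))
    return np.array(table)                                # table[xi_bin] = best delta2 index

# Toy usage: once the table exists, delta2 = f(xi) is a single array lookup.
rng = np.random.default_rng(5)
table = precompute_viewpoint_table(rng.dirichlet(np.ones(36), size=36),
                                   rng.dirichlet(np.ones(36), size=(12, 36)),
                                   np.linspace(0, 350, 36))
delta2_for_pose_bin_7 = table[7]
```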
Next, a recommendation method performed by the viewpoint recommendation apparatus 1 according to this embodiment is explained in detail.
In the recommendation method performed by the viewpoint recommendation apparatus 1 according to this embodiment, firstly, in a preliminary process, a learning step of learning the image feature in object recognition, the estimated position accuracy map, and the multi-viewpoint likelihood map is executed, and then an estimation step of estimating the second viewpoint using a result of the learning is executed. Firstly, the learning step is explained.
The first deep layer learning unit 3 learns the image feature of each pose of the object using a plurality of sets of images of the object and learning data of the pose of the object (Step S101).
The first statistical processing unit 4 learns the relation between the true first viewpoint φ1 and the likelihood p(φ1|ξ) of this first viewpoint in the estimated pose ξ, and stores a result of the learning as the estimated position accuracy map (Step S102).
The second statistical processing unit 5 learns the relation between the pose θ of the object when the object is observed at the first viewpoint φ1 and the second viewpoint δ2 and the likelihood p(θ|δ2, φ1) of this pose, and stores a result of the learning as the multi-viewpoint likelihood map (Step S103).
Next, the estimation step is explained. The image acquisition unit 2 acquires an image of the object at the first viewpoint (Step S104). The image acquisition unit 2 outputs the acquired image of the object to the image feature extraction unit 7.
The image feature extraction unit 7 extracts the image feature of the object from the image of the object output from the image acquisition unit 2 using the filter model stored in the storage unit 6 (Step S105). The image feature extraction unit 7 outputs the extracted image feature of the object to the pose estimation unit 8.
The pose estimation unit 8 compares the image feature of the object extracted by the image feature extraction unit 7 with the image feature of each pose learned in advance by the first deep layer learning unit 3, and calculates the likelihood of the pose of the object to calculate the estimated pose likelihood map (Step S106). The pose estimation unit 8 outputs the calculated estimated pose likelihood map to the viewpoint estimation unit 9.
The viewpoint estimation unit 9 determines whether a maximum value of the likelihood of the estimated pose is greater than or equal to a predetermined value based on the estimated pose likelihood map from the pose estimation unit 8 (Step S107). Note that the predetermined value is set in advance in, for example, the storage unit 6. The predetermined value is set, for example, according to the pose estimation accuracy required by the robot.
When the viewpoint estimation unit 9 determines that the maximum value of the likelihood of the estimated pose is greater than or equal to the predetermined value (YES in Step S107), it estimates the second viewpoint δ2 (hat) using the above formula (5) based on the estimated pose likelihood map from the pose estimation unit 8, the estimated position accuracy map learned by the first statistical processing unit 4, and the multi-viewpoint likelihood map learned by the second statistical processing unit 5 (Step S108).
When the viewpoint estimation unit 9 determines that the maximum value of the likelihood of the estimated pose is not greater than or equal to the predetermined value (NO in Step S107), it ends this process. In this case, since the optimum second viewpoint cannot be estimated at this first viewpoint, it is necessary to move the image acquisition unit 2 to a position where it is easy to observe the object.
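Putting the estimation step together, the following sketch mirrors steps S104 to S108; every object and method it uses is a hypothetical placeholder standing in for the units described above, not an actual interface of the apparatus.

```python
def estimation_step(camera, extractor, pose_estimator, viewpoint_estimator, threshold):
    """Hypothetical flow of steps S104-S108 (all callables are illustrative placeholders)."""
    image = camera.acquire()                              # S104: image at the first viewpoint
    feature = extractor.extract(image)                    # S105: image feature of the object
    likelihood_map = pose_estimator.estimate(feature)     # S106: estimated pose likelihood map

    if max(likelihood_map.values()) < threshold:          # S107: likelihood check
        return None                                       # move to an easier viewpoint first
    return viewpoint_estimator.estimate(likelihood_map)   # S108: recommended second viewpoint
```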
The viewpoint recommendation apparatus 1 according to this embodiment may be mounted on, for example, an autonomous mobile robot.
The image acquisition unit 2 is provided on, for example, the head of the robot 10. The robot 10 is configured as, for example, an articulated humanoid robot. An actuator 11 such as a servomotor is provided at each joint of the robot 10. The control unit 12 can move the image acquisition unit 2 to a desired position by controlling the actuators 11 of the respective joints to drive the respective joints.
The viewpoint estimation unit 9 outputs the estimated second viewpoint δ2 to the control unit 12. Based on the second viewpoint δ2 from the viewpoint estimation unit 9, the control unit 12 controls each actuator 11 to move the image acquisition unit 2 from the first viewpoint φ1 to the second viewpoint δ2 estimated by the viewpoint estimation unit 9. The image acquisition unit 2 acquires an image of the object at the moved second viewpoint.
The viewpoint recommendation apparatus 1 may be configured not to be mounted on the robot 10. In this case, the viewpoint recommendation apparatus 1 is connected to the robot 10 wirelessly or via a wire. The viewpoint estimation unit 9 transmits the estimated second viewpoint δ2 to the control unit 12 by radio such as Bluetooth (registered trademark) or Wi-Fi (registered trademark).
As described above, in this embodiment, the second viewpoint δ2 is estimated so that the value of the evaluation function g(⋅) becomes the maximum or the minimum. The evaluation function g(⋅) here uses, as a parameter, a result of multiplying and integrating the estimated pose likelihood map p(ξ|I1) calculated by the pose estimation unit 8, the estimated position accuracy map p(φ1|ξ) learned by the first statistical processing unit 4, and the multi-viewpoint likelihood map p(θ|δ2, φ1) learned by the second statistical processing unit 5.
Therefore, by estimating the second viewpoint δ2 so that the value of the evaluation function g(⋅) becomes the maximum or the minimum, i.e., so that the kurtosis of the likelihood distribution of the pose θ of the object becomes the maximum, it is possible to estimate the pose of the object with high accuracy.
Although some embodiments of the present disclosure have been described, these embodiments have been presented merely as examples and are not intended to limit the scope of the present disclosure. These novel embodiments can be implemented in various forms other than those described above. Various omissions, substitutions, and changes can be made without departing from the spirit of the present disclosure. These embodiments and modifications of the embodiments are included in the scope and the spirit of the present disclosure and included in the present disclosure described in claims and a scope of equivalents of the present disclosure.
The present disclosure can also be achieved, for example, by causing the CPU to execute a computer program that performs the above-described processes.
The program can be stored and provided to a computer using any type of non-transitory computer readable media. Non-transitory computer readable media include any type of tangible storage media. Examples of non-transitory computer readable media include magnetic storage media (such as floppy disks, magnetic tapes, hard disk drives, etc.), optical magnetic storage media (e.g. magneto-optical disks), CD-ROM, CD-R, CD-R/W, and semiconductor memories (such as mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM, etc.).
The program may be provided to a computer using any type of transitory computer readable media. Examples of transitory computer readable media include electric signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g. electric wires, and optical fibers) or a wireless communication line.
From the disclosure thus described, it will be obvious that the embodiments of the disclosure may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure, and all such modifications as would be obvious to one skilled in the art are intended for inclusion within the scope of the following claims.