The present invention relates to a technique for generating a textured surface that is tactilely equivalent to a textured surface of a reference object by controlling a spatial depth distribution.
Controlling the feel of materials, such as plastic products, that humans touch on a daily basis is important for the development of easy-to-use products. In recent years, with the development of 3D printer technology, it has become possible to process a complicated spatial pattern on the surface of a material. However, most human tactile research, which is the basis of tactile sensation control, is mainly concerned with simple roughness perception (NPL 1), and there are many unclear points about what kind of spatial pattern the human tactile system is sensitive to. In this regard, tactile discrimination experiments using textures with various spatial depth distributions have reported that human tactile discrimination results are explained by the similarity of the amplitude spectra obtained when the spatial depth patterns are Fourier transformed (NPL 2).
However, NPL 2 merely reveals factors defining the human tactile perceptions, and there is no known technique for generating a textured surface that is tactilely equivalent to a textured surface of a reference object based on this knowledge.
An object of the present invention is to generate a textured surface that is tactilely equivalent to a textured surface of a reference object.
By performing, on an original image having pixel values representing depths of positions on a textured surface of a target object, a conversion for making an element histogram of a steerable pyramid of the original image the same as or similar to an element histogram of a steerable pyramid of a reference image having pixel values representing depths of positions on a textured surface of a reference object, for each spatial frequency band and each orientation band, histogram modulated images are obtained, and the histogram modulated images obtained for the spatial frequency bands and orientation bands are synthesized to obtain an image corresponding to a modulated image having pixel values representing depths of positions on a textured surface of a modulated object.
This makes it possible to generate a textured surface that is tactilely equivalent to a textured surface of a reference object.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
In the following, in a case where the distribution of concavities and convexities on a textured surface (e.g., wood surface) (reference texture) of a desired reference object (e.g., wood), that is, the spatial depth distribution (height distribution) of the textured surface is known, a textured surface (new texture) of a new modulated object that gives a feel equivalent (tactilely equivalent) to the reference texture is generated. An object is, for example, an object having a certain surface area, such as a natural object such as wood or stone, or an attachment such as tile, wall, seal, package, or the like, and is an object that gives a feel with a three-dimensional spatial structure (two-dimensional distribution of convexities and concavities) that characterizes the object within a predetermined surface area.
In the present embodiment, a texture synthesis technique is utilized to generate a new texture that gives a feel equivalent to the spatial depth distribution of the reference texture. The texture synthesis technique is a technique for generating an image, but it can be used for generating the depths of a new texture by treating the spatial depth distribution of the textured surface as an image.
In the present embodiment, by matching with a reference image having each pixel value representing the depth of each position on the textured surface of the reference object, a modulated image can be obtained having each pixel value representing the depth of each position on the textured surface (new texture) of a modulated object that is difficult to distinguish from the reference object by tactile perception. Examples of the spatial frequency distribution to be matched include wavelet statistics and the Fourier power spectrum. That is, for example, for a reference image, a modulated image having wavelet statistics or a Fourier power spectrum that is the same as or substantially the same as the wavelet statistics or Fourier power spectrum of the reference image is found or generated. Matching, here, is to find or generate an image having a spatial frequency distribution which is the same as or substantially the same as the spatial frequency distribution of a certain image. Note that, here, the term “substantially the same” refers to a case where, for example, the spatial frequency distributions of two images are not exactly the same due to the resolutions of the two images or the like.
For example, with an original image having each pixel value representing the depth (height) of each position on the textured surface of any original object (target object), and a reference image having each pixel value representing the depth of each position on the textured surface of a reference object as input, histogram modulated images are obtained by performing a conversion on the original image that makes the element histogram of the steerable pyramid of the original image the same as or similar to the element histogram of the steerable pyramid of the reference image for each spatial frequency band and each orientation band. In addition, the histogram modulated images obtained in this way for each spatial frequency band and each orientation band are synthesized to obtain an image corresponding to a modulated image having each pixel value representing the depth of each position on the textured surface of a modulated object.
Functional Configuration of Texture Generation Apparatus
Hardware and Cooperation Between Hardware and Software
As illustrated in
For example, the CPU 10a writes a program stored in the program region 10da of the auxiliary storage device 10d to the program region 10fa of the RAM 10f in accordance with the operating system (OS) program that has been read. Similarly, the CPU 10a writes data stored in the data region 10db of the auxiliary storage device 10d to the data region 10fb of the RAM 10f. Then, the address on the RAM 10f at which this program or data has been written is stored in the register 10ac of the CPU 10a. The control unit 10aa of the CPU 10a sequentially reads out these addresses stored in the register 10ac, reads out programs and data from the regions on the RAM 10f indicated by the read addresses, causes the calculation unit 10ab to sequentially execute the operations indicated by the programs, and stores the calculation results in the register 10ac. The texture generation apparatus 1 illustrated in
Processing of Texture Generation Apparatus 1
An original image Ic having each pixel value representing the depth of each position on the textured surface of a target object and a reference image Ir having each pixel value representing the depth of each position on the textured surface of a reference object are input to the texture generation apparatus 1. The pixel values of the original image Ic and the reference image Ir represent, for example, the luminance values. In the present embodiment, the original image Ic and the reference image Ir are different from each other. That is, the textured surface of the target object and the textured surface of the reference object differ from each other in spatial arrangement. However, this is an example, and the original image Ic and the reference image Ir may be the same as each other.
The original image Ic is illustrated in
Spatial Frequency Domain Conversion Unit 101
The original image Ic and the reference image Ir are input to the spatial frequency domain conversion unit 101. The spatial frequency domain conversion unit 101 converts the original image Ic into an original image of the spatial frequency domain (spatial frequency domain original image) to output the spatial frequency domain original image Ic{tilde over ( )}, and converts the reference image Ir into a reference image of the spatial frequency domain (spatial frequency domain reference image) to output the spatial frequency domain reference image Ir{tilde over ( )}. Here, the superscript “{tilde over ( )}” of “Ic{tilde over ( )}” or “Ir{tilde over ( )}” should be written directly above “Ic” or “Ir”, but hereinafter may be written to the upper right of “Ic” or “Ir” due to limitations of the description notation. The spatial frequency domain original image Ic{tilde over ( )} and the spatial frequency domain reference image Ir{tilde over ( )} are two-dimensional arrays having Ic{tilde over ( )} (ωx, ωy) and Ir{tilde over ( )} (ωx, ωy) as elements, respectively. Here, ωx represents the spatial frequency in the horizontal direction, and ωy represents the spatial frequency in the vertical direction. For example, the discrete Fourier transform or the wavelet transform can be used for the conversion from the original image Ic to the spatial frequency domain original image Ic{tilde over ( )} and the conversion from the reference image Ir to the spatial frequency domain reference image Ir{tilde over ( )}.
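For example, using the discrete Fourier transform, the conversion performed by the spatial frequency domain conversion unit 101 can be sketched as follows (a non-limiting NumPy sketch; the 256×256 array size and the random depth data are illustrative assumptions only):

```python
import numpy as np

# Minimal sketch of spatial frequency domain conversion unit 101, assuming
# the depth maps are stored as 2-D floating-point arrays (hypothetical data).
rng = np.random.default_rng(0)
I_c = rng.random((256, 256))  # original image: depth of each position
I_r = rng.random((256, 256))  # reference image: depth of each position

# 2-D discrete Fourier transform; the resulting complex arrays are indexed
# by the horizontal and vertical spatial frequencies (wx, wy).
I_c_tilde = np.fft.fft2(I_c)  # spatial frequency domain original image
I_r_tilde = np.fft.fft2(I_r)  # spatial frequency domain reference image
```

The inverse transform recovers the depth map, which is what the spatial domain conversion units rely on later in the pipeline.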
Decomposition Unit 102
The spatial frequency domain original Ic{tilde over ( )} and the spatial frequency domain reference image Ir{tilde over ( )} are input to the decomposition unit 102. The decomposition unit 102 applies a complex steerable filter sequence Ψ to the spatial frequency domain original Ic{tilde over ( )} and the spatial frequency domain reference image to obtain and output a complex steerable pyramid of the spatial frequency domain original Ic{tilde over ( )} and a complex steerable pyramid
of the spatial frequency domain reference image Ir{tilde over ( )}. Here, the steerable filter sequence Ψ includes steerable filters Ψλ,μ corresponding to the spatial frequency band λ and the orientation band μ. Here, λ is an integer index corresponding to a spatial frequency band having a predetermined width, and μ is an integer index corresponding to an orientation band having a predetermined width. The conditions λmin≤λ≤λmax, μmin≤μ≤μmax, λmin<λmax, and μmin<μmax are satisfied. A smaller λ corresponds to a lower frequency band. For example, by giving values of λmax=4 and μmax=4, a complex steerable pyramid can be obtained for an image having a spatial pixel size of 256 pixels×256 pixels. As described below, the decomposition unit 102 multiplies each of the spatial frequency domain original image Ic{tilde over ( )} and the spatial frequency domain reference image Ir{tilde over ( )} by the steerable filter Ψλ,μ for all combinations of λ and μ to obtain and output a complex steerable pyramid
of spatial frequency domain original Ic{tilde over ( )} and a complex steerable pyramid
of the spatial frequency domain reference image corresponding to each spatial frequency band λ and each orientation band μ.
S{tilde over ( )}cλ,μ = Ψλ,μ Ic{tilde over ( )}

S{tilde over ( )}rλ,μ = Ψλ,μ Ir{tilde over ( )}
In the following, due to the limitation of the description notation, the complex steerable pyramid of the spatial frequency domain original Ic{tilde over ( )} may be denoted as S{tilde over ( )}cλ,μ, and the complex steerable pyramid of the spatial frequency domain reference Ir{tilde over ( )} may be denoted as S{tilde over ( )}rλ,μ.
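The elementwise multiplication by the filters Ψλ,μ can be sketched as follows. The filter defined here is a hypothetical stand-in (a Gaussian band in log radial frequency times a squared-cosine orientation window), not the actual complex steerable filter of the embodiment; it only illustrates how each band S{tilde over ( )}cλ,μ is obtained as a product in the frequency domain:

```python
import numpy as np

def psi(shape, lam, mu, n_freq=4, n_orient=4):
    # Hypothetical stand-in for the steerable filter Psi_{lambda,mu}:
    # a Gaussian band in log radial frequency (smaller lam = lower band)
    # times a cos^2 orientation window centered on orientation band mu.
    h, w = shape
    wy, wx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    r = np.hypot(wx, wy) + 1e-12              # radial spatial frequency
    theta = np.arctan2(wy, wx)                # orientation angle
    centers = np.geomspace(0.03, 0.35, n_freq)
    radial = np.exp(-np.log(r / centers[lam]) ** 2 / 0.5)
    angular = np.cos(theta - np.pi * mu / n_orient) ** 2
    return radial * angular

# Decomposition unit 102 (sketch): elementwise product of the
# frequency-domain image with the filter for every (lambda, mu).
I_c_tilde = np.fft.fft2(np.random.default_rng(0).random((256, 256)))
S_c_tilde = {(lam, mu): psi(I_c_tilde.shape, lam, mu) * I_c_tilde
             for lam in range(4) for mu in range(4)}
```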
Spatial Domain Conversion Unit 103
The complex steerable pyramid S{tilde over ( )}cλ,μ of the spatial frequency domain original image Ic{tilde over ( )} and the complex steerable pyramid S{tilde over ( )}rλ,μ of the spatial frequency domain reference image Ir{tilde over ( )} are input to the spatial domain conversion unit 103. The spatial domain conversion unit 103 converts the complex steerable pyramid S{tilde over ( )}cλ,μ of the spatial frequency domain original image Ic{tilde over ( )} into a complex steerable pyramid S′cλ,μ of the spatial domain, and outputs the real part of the complex steerable pyramid S′cλ,μ of the spatial domain as a steerable pyramid Scλ,μ of the original image. The spatial domain conversion unit 103 converts the complex steerable pyramid S{tilde over ( )}rλ,μ of the spatial frequency domain reference image Ir{tilde over ( )} into a complex steerable pyramid S′rλ,μ of the spatial domain, and outputs the real part of the complex steerable pyramid S′rλ,μ of the spatial domain as a steerable pyramid Srλ,μ of the reference image. For example, the inverse discrete Fourier transform or the inverse wavelet transform can be used for the conversion from the complex steerable pyramids S{tilde over ( )}cλ,μ and S{tilde over ( )}rλ,μ to the complex steerable pyramids S′cλ,μ and S′rλ,μ. The steerable pyramid Scλ,μ and the steerable pyramid Srλ,μ are two-dimensional arrays having Scλ,μ (x, y) and Srλ,μ (x, y) as elements (pixels), respectively.
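Using the inverse discrete Fourier transform, the conversion of the spatial domain conversion unit 103 for one band can be sketched as follows (the band data is a hypothetical placeholder standing in for one S{tilde over ( )}cλ,μ):

```python
import numpy as np

# One complex steerable pyramid band in the spatial frequency domain
# (placeholder data standing in for S~c^{lambda,mu}).
S_c_tilde = np.fft.fft2(np.random.default_rng(1).random((64, 64)))

S_c_prime = np.fft.ifft2(S_c_tilde)  # complex steerable pyramid of the spatial domain
S_c = S_c_prime.real                 # real part: steerable pyramid Sc^{lambda,mu}
```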
Histogram Conversion Unit 104
The steerable pyramid Scλ,μ of the original image and the steerable pyramid Srλ,μ of the reference image are input to the histogram conversion unit 104. The histogram conversion unit 104 obtains a histogram modulated image Sc′λ,μ by performing a conversion on the original image that makes the element histogram of the steerable pyramid Scλ,μ of the original image the same as or similar to the element histogram of the steerable pyramid Srλ,μ of the reference image for each spatial frequency band λ and each orientation band μ, and outputs the histogram modulated image Sc′λ,μ. That is, in each spatial frequency band λ and each orientation band μ, the element histogram of the histogram modulated image Sc′λ,μ is the same as or similar to the element histogram of the steerable pyramid Srλ,μ of the reference image. Note that the element histogram of the steerable pyramid Scλ,μ is a histogram of the elements Scλ,μ (x, y) of the steerable pyramid Scλ,μ. Similarly, the element histogram of the steerable pyramid Srλ,μ is a histogram of the elements Srλ,μ (x, y) of the steerable pyramid Srλ,μ. That the element histogram of Sc′λ,μ is similar to the element histogram of Srλ,μ means that the similarity between the element histogram of Sc′λ,μ and the element histogram of Srλ,μ is equal to or greater than a predetermined threshold value, or that the distance between the element histogram of Sc′λ,μ and the element histogram of Srλ,μ is equal to or less than a predetermined threshold value. There is no limitation on the conversion method for making the element histogram of the steerable pyramid Scλ,μ of the original image the same as or similar to the element histogram of the steerable pyramid Srλ,μ of the reference image. Any image conversion method may be used as long as the histogram of one of the two images is converted to be the same as or similar to that of the other.
As an example, a conversion method using a cumulative distribution function Fcλ,μ (i) and an inverse cumulative distribution function F′rλ,μ (Fcλ,μ (i)) can be used. The cumulative distribution function Fcλ,μ is defined with the element values i=Scλ,μ (x, y) as random variables for each steerable pyramid Scλ,μ of each spatial frequency band λ and each orientation band μ. The cumulative distribution function Fcλ,μ (i) is a function that, for the input of the element value i of the steerable pyramid Scλ,μ, outputs the cumulative probability density (a scalar) of the element values less than or equal to the element value i of the steerable pyramid Scλ,μ. For example, the cumulative distribution function Fcλ,μ (i) outputs a value from 0 to 1. The inverse cumulative distribution function F′rλ,μ is defined for each steerable pyramid Srλ,μ of each spatial frequency band λ and each orientation band μ. The inverse cumulative distribution function F′rλ,μ (Fcλ,μ (i)) outputs, for the input of the cumulative probability density Fcλ,μ (i), the element value i′ (converted element value i′) at which the cumulative probability density of the element values less than or equal to i′ is Fcλ,μ (i). The cumulative distribution function Fcλ,μ (i) and the inverse cumulative distribution function F′rλ,μ (Fcλ,μ (i)) thus convert the element value i of the input steerable pyramid Scλ,μ into the converted element value i′ for each spatial frequency band λ and each orientation band μ, and the histogram conversion unit 104 outputs the histogram modulated image Sc′λ,μ, which is an image having the converted element values i′ as elements.
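A minimal sketch of this CDF-based conversion for one band (λ, μ), using empirical cumulative distribution functions on equal-sized arrays (the function name and the rank-based tie handling are implementation assumptions):

```python
import numpy as np

def histogram_match(S_c, S_r):
    # Sketch of histogram conversion unit 104 for one band (lambda, mu):
    # each element value i of S_c is sent through the empirical CDF F_c,
    # and F_c(i) through the inverse reference CDF F'_r, so that the
    # output's element histogram matches that of S_r. Assumes equal-sized
    # arrays; ties are broken by array order.
    shape = S_c.shape
    s = S_c.ravel()
    r_sorted = np.sort(S_r.ravel())
    ranks = np.argsort(np.argsort(s))      # rank of each element of S_c
    F_c = (ranks + 0.5) / s.size           # empirical CDF value F_c(i)
    idx = np.clip((F_c * r_sorted.size).astype(int), 0, r_sorted.size - 1)
    return r_sorted[idx].reshape(shape)    # converted element values i'
```

With equal-sized arrays this maps the k-th smallest element of Scλ,μ to the k-th smallest element of Srλ,μ, so the element histogram of the output coincides with that of the reference band while the spatial arrangement of the original band is preserved.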
Spatial Frequency Domain Conversion Unit 105
The histogram modulated image Sc′λ,μ is input to the spatial frequency domain conversion unit 105. The spatial frequency domain conversion unit 105 converts the histogram modulated image Sc′λ,μ into a histogram modulated image of the spatial frequency domain (spatial frequency domain histogram modulated image) and outputs the spatial frequency domain histogram modulated image S{tilde over ( )}c′λ,μ. It should be noted that the superscript “{tilde over ( )}” of “S{tilde over ( )}c′λ,μ” should be written directly above the entire “Sc′λ,μ”, but due to the limitation of the description notation, it is written to the upper right of “S”. For example, the discrete Fourier transform or the wavelet transform can be used for the conversion from the histogram modulated image Sc′λ,μ to the spatial frequency domain histogram modulated image S{tilde over ( )}c′λ,μ.
Reconstruction Unit 106
The spatial frequency domain histogram modulated image S{tilde over ( )}c′λ,μ is input to the reconstruction unit 106. The reconstruction unit 106 synthesizes the spatial frequency domain histogram modulated images S{tilde over ( )}c′λ,μ of all the spatial frequency bands λ and the orientation bands μ to obtain a spatial frequency domain modulated image Ic′{tilde over ( )} and outputs the spatial frequency domain modulated image Ic′{tilde over ( )}. The reconstruction unit 106 obtains, for example, the spatial frequency domain modulated image Ic′{tilde over ( )} by summing over all the bands as follows.

Ic′{tilde over ( )} = Σλ,μ S{tilde over ( )}c′λ,μ
Alternatively, the reconstruction unit 106 may apply the steerable filter sequence Ψ described above to the spatial frequency domain histogram modulated image S{tilde over ( )}c′λ,μ to obtain the spatial frequency domain modulated image Ic′{tilde over ( )} as follows.

Ic′{tilde over ( )} = Σλ,μ Ψλ,μ S{tilde over ( )}c′λ,μ
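Both variants of the reconstruction can be sketched as a sum over all bands. The band data below is a dummy placeholder, and whether the plain or re-filtered sum reconstructs the spectrum exactly depends on the filter bank satisfying a tiling condition, which is assumed here:

```python
import numpy as np

# Dummy spatial frequency domain histogram modulated bands S~c'^{lambda,mu}
# (placeholder data; 4 frequency bands x 4 orientation bands of size 8x8).
S_mod_tilde = {(lam, mu): np.full((8, 8), lam + mu, dtype=complex)
               for lam in range(4) for mu in range(4)}

# Reconstruction unit 106, direct-sum variant:
#   Ic'~ = sum over lambda, mu of S~c'^{lambda,mu}
I_mod_tilde = sum(S_mod_tilde.values())
```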
Spatial Domain Conversion Unit 107
The spatial frequency domain modulated image Ic′{tilde over ( )} is input to the spatial domain conversion unit 107. The spatial domain conversion unit 107 converts the spatial frequency domain modulated image Ic′{tilde over ( )} into a modulated image Ic′ of the spatial domain and outputs the modulated image Ic′. Here, the modulated image Ic′ is a two-dimensional array having Ic′ (x, y) as elements (pixels). For example, the inverse discrete Fourier transform or the inverse wavelet transform can be used for the conversion from the spatial frequency domain modulated image Ic′{tilde over ( )} to the modulated image Ic′.
Modulated Object
The element values (pixel values) of the modulated image Ic′ obtained as described above are used as the depth values (height values) when the modulated image Ic′ is three-dimensionally represented, for example, by 3D printing. That is, in a case where a three-dimensional representation of the modulated image Ic′ is realized by 3D printing, the information representing the modulated image Ic′ is input to the 3D printer, and the 3D printer prints the pixel values of the modulated image Ic′ as the depth of each position on the textured surface in 3D to obtain an object (modulated object) with a textured surface having the depth of each position represented by each pixel value of the modulated image Ic′. Each position on the textured surface of the modulated object has the depth represented by the value of each position of the modulated image Ic′ obtained by matching the original image Ic to the reference image Ir. Matching, here, is to find or generate an image having a spatial frequency distribution which is the same as or substantially the same as the spatial frequency distribution of a certain image. That is, the textured surface of the modulated object is difficult to distinguish from the textured surface of the reference object by tactile perception. In a case where the original image Ic and the reference image Ir are mutually different images, the spatial arrangement of the concave-convex pattern on the textured surface of the modulated object is different from the spatial arrangement of the values of the depth at each point on the textured surface of the reference object. Here, the spatial arrangement of the concave-convex pattern means the spatial arrangement of the value of the depth (value of height) at each point on the textured surface of the modulated object, that is, what z value (depth (height)) is taken by the point (pixel) with a certain xy coordinate value in the image.
That is, the modulated object has, at each position, a depth whose spatial frequency distribution matches the spatial frequency distribution of the values representing the depth of each position on the textured surface of the reference object, and has a textured surface whose spatial arrangement of depth values differs from that of the textured surface of the reference object. For example, a modulated object is generated (from the original image and the reference image) having a textured surface that is the same as the textured surface of the reference object in the values and frequencies of the depth (height) (the histogram of the pixel values in an image where the depth is treated as the luminance value (pixel value)) as well as in the spatial periodicity of the depth (height) (the amplitude spectrum in an image where the depth is treated as the luminance value (pixel value)), but different in the spatial arrangement of the depth (height). The textured surface (new textured surface) of such a modulated object is perceived as tactilely equivalent (the same to the touch) even if it is perceived as visually different from the textured surface of the reference object (reference textured surface).
Experiment
As shown in
Note that M1 is the same image as O1. Objects for tactile stimulation (material with length×width×thickness: 40 mm×40 mm×10 to 12 mm) were created by using the element values of each of these 10 images (O1 to O5, M1 to M5), as the depths of the concavities and convexities in 3D printing (
Supplement
As described above, in the embodiment, the spatial frequency domain conversion unit 101 converts the input original image Ic and the input reference image Ir into the spatial frequency domain to obtain and output the spatial frequency domain original image Ic{tilde over ( )} and the spatial frequency domain reference image Ir{tilde over ( )}. The decomposition unit 102 uses the input spatial frequency domain original image Ic{tilde over ( )} and the input spatial frequency domain reference image Ir{tilde over ( )} to obtain and output the complex steerable pyramid S{tilde over ( )}cλ,μ of the spatial frequency domain original image and the complex steerable pyramid S{tilde over ( )}rλ,μ of the spatial frequency domain reference image for each spatial frequency band λ and each orientation band μ. The spatial domain conversion unit 103 converts the input complex steerable pyramid S{tilde over ( )}cλ,μ of the spatial frequency domain original image and the input complex steerable pyramid S{tilde over ( )}rλ,μ of the spatial frequency domain reference image into the spatial domain to obtain the complex steerable pyramid S′cλ,μ of the spatial domain of the original image and the complex steerable pyramid S′rλ,μ of the spatial domain of the reference image for each spatial frequency band λ and each orientation band μ, obtains the real part of the complex steerable pyramid S′cλ,μ of the spatial domain of the original image as the steerable pyramid Scλ,μ of the original image, obtains the real part of the complex steerable pyramid S′rλ,μ of the spatial domain of the reference image as the steerable pyramid Srλ,μ of the reference image, and outputs the steerable pyramid Scλ,μ of the original image and the steerable pyramid Srλ,μ of the reference image. The histogram conversion unit 104 uses the input steerable pyramid Srλ,μ of the reference image and the input steerable pyramid Scλ,μ of the original image to obtain and output the histogram modulated image Sc′λ,μ.
The histogram modulated image Sc′λ,μ is obtained by performing a conversion on the original image Ic that makes the element histogram of the steerable pyramid Scλ,μ of the original image the same as or similar to the element histogram of the steerable pyramid Srλ,μ of the reference image for each spatial frequency band λ and each orientation band μ. The spatial frequency domain conversion unit 105 converts the input histogram modulated image Sc′λ,μ into the spatial frequency domain to obtain and output the spatial frequency domain histogram modulated image S{tilde over ( )}c′λ,μ for each spatial frequency band λ and each orientation band μ. The reconstruction unit 106 synthesizes the input spatial frequency domain histogram modulated images S{tilde over ( )}c′λ,μ to obtain and output the spatial frequency domain modulated image Ic′{tilde over ( )}. The spatial domain conversion unit 107 converts the input spatial frequency domain modulated image Ic′{tilde over ( )} into the spatial domain to obtain and output the modulated image Ic′ of the spatial domain.
The textured surface of the modulated object whose depth at each position is represented by each pixel value of the modulated image Ic′ is tactilely equivalent to the textured surface of the reference object. In the analysis based on the Fourier transform of NPL 2, in a case where the spatial sizes of the textured surface of the reference object and the textured surface of the target object are different, the amplitude spectrum cannot be uniquely matched, and a textured surface that is tactilely equivalent to the textured surface of the reference object cannot be generated. On the other hand, in the present embodiment, even if the space sizes of the textured surface of the reference object and the textured surface of the target object are different, a textured surface that is tactilely equivalent to the textured surface of the reference object can be generated.
In the prior arts, basic skin performance has been investigated using homogeneous stimuli (for example, repeating patterns containing only frequencies in a narrow band), and the literature says that "it is possible to distinguish a difference even on the order of nanometers." Regarding the distribution of receptors, there are anatomical values such as "140 units/cm2" (an interval of about 1 mm or more). The inventors have found that skin with such fine performance can be tricked by an illusion in which different signals are encoded at the receptor and nerve levels but, for some reason, feel the same when decoded. The present invention is based on this discovery. It should be noted that a finer texture feeling can be presented as the resolution of the textured surface of the object output from the 3D printer increases. For example, it is preferable to use a 3D printer capable of producing a resolution finer than 0.25 mm.
The algorithm described above is merely an example, and the spatial frequency distribution of the pixel values of the original image may be matched with that of the reference image in other ways. For example, as illustrated in
Here, the conversion unit 21 may receive an input of the original Ic and the reference image Ir as described above, may receive an input of the spatial frequency domain original Ic{tilde over ( )} and the spatial frequency domain reference image Ir{tilde over ( )}, may receive an input of the complex steerable pyramid S{tilde over ( )}cλ,μ of the spatial frequency domain original image Ic and the complex steerable pyramid S{tilde over ( )}rλ,μ of the spatial frequency domain reference image, or may receive an input of the steerable pyramid Scλ,μ of the original image Ic and the steerable pyramid Srλ,μ of the reference image. The synthesis unit 22 may output the modulated image Ic′, or may output the spatial frequency domain modulated image Ic′{tilde over ( )}, as an image corresponding to the modulated image Ic′.
That is, by matching with a reference image having each pixel value representing the depth of each position on the textured surface of a reference object (for example, the spatial frequency distribution is wavelet statistics, or Fourier power spectrum), an image can be obtained corresponding to a modulated image having each pixel value representing the depth of each position on the textured surface of a modulated object that is difficult to distinguish from the reference object by tactile perception. Any method may be used as long as the image is obtained by such a method described above. For example, by performing a conversion on the original image that makes wavelet statistics of the spatial frequency distribution of the original image having each pixel value representing the depth of each position on the textured surface of the target object the same as or substantially the same as wavelet statistics of the reference image having each pixel value representing the depth of each position on the textured surface of the reference object, an image may be obtained corresponding to a modulated image having each pixel value representing the depth of each position on the textured surface of a modulated object that is difficult to distinguish from the reference object by tactile perception.
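For instance, matching the Fourier amplitude spectrum directly can be sketched by combining the amplitude spectrum of the reference image with the phase spectrum of the original image (a simplified alternative under the assumption that the phase is taken from the original image; the function name is illustrative, and full wavelet-statistics matching would instead require a pyramid decomposition as in the embodiment):

```python
import numpy as np

def match_amplitude_spectrum(I_c, I_r):
    # Keep the phase of the original depth image I_c and impose the
    # amplitude spectrum of the reference depth image I_r; the result has
    # the reference's Fourier amplitude spectrum but a different spatial
    # arrangement of concavities and convexities.
    F_c = np.fft.fft2(I_c)
    F_r = np.fft.fft2(I_r)
    F_mod = np.abs(F_r) * np.exp(1j * np.angle(F_c))
    return np.fft.ifft2(F_mod).real

rng = np.random.default_rng(0)
I_mod = match_amplitude_spectrum(rng.random((64, 64)), rng.random((64, 64)))
```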
Other Modifications and Others
Note that the present invention is not limited to the above-described embodiment. For example, as described above, the texture generation apparatus is an apparatus embodied by a general-purpose or dedicated computer, including a processor (hardware processor) such as a CPU and a memory such as a RAM/ROM, executing a predetermined program. The computer may include a single processor and memory, or may include a plurality of processors and memories. This program may be installed on the computer, or may be recorded in the ROM or the like in advance. Some or all of the processing units may be implemented by using an electronic circuit that independently realizes a processing function, instead of using an electronic circuit (circuitry), such as a CPU, that realizes a functional configuration when a program is loaded into it. An electronic circuit constituting one device may include a plurality of CPUs.
The various processes described above may be executed not only in chronological order as described but also in parallel or on an individual basis as necessary or depending on the processing capabilities of the apparatuses that execute the processing. It is needless to say that the present invention can appropriately be modified without departing from the gist of the present invention.
When the configuration described above is realized by a computer, processing details of functions that each device should have are described by the program. In addition, when the program is executed by the computer, the processing functions described above are implemented on the computer. The program in which the processing details are described can be recorded on a computer-readable recording medium. An example of a computer-readable recording medium is a non-transitory recording medium. Examples of such a recording medium include a magnetic recording device, an optical disc, a magneto-optical recording medium, and a semiconductor memory.
In addition, the program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or a CD-ROM with the program recorded on it. Further, the program may be stored in a storage device of a server computer and transmitted from the server computer to another computer via a network, so that the program is distributed.
For example, a computer executing the program first temporarily stores the program recorded on the portable recording medium or the program transmitted from the server computer in its own storage device. When executing the processing, the computer reads the program stored in its own storage device and executes the processing in accordance with the read program. Further, as another execution mode of this program, the computer may directly read the program from the portable recording medium and execute processing in accordance with the program, or, further, may sequentially execute the processing in accordance with the received program each time the program is transferred from the server computer to the computer. The above-described processing may be executed by a so-called application service provider (ASP) service in which processing functions are implemented just by issuing an instruction to execute the program and obtaining results without transmitting the program from the server computer to the computer. Further, the program in the embodiment is assumed to include information which is provided for processing of a computer and is equivalent to a program (data or the like that has characteristics of regulating processing of the computer rather than being a direct instruction to the computer).
In addition, although the apparatus is embodied by executing a predetermined program on a computer in the embodiment, at least a part of the processing details may be implemented by hardware.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2019/045084 | 11/18/2019 | WO |