The present invention relates to a position detection system, a position detection method, an information storage medium, an image generation device, and the like.
A gun game that allows the player to enjoy shooting a target object displayed on a screen using a gun-type controller has been popular. When the player (operator) has pulled the trigger of the gun-type controller, the shot impact position (pointing position) is optically detected utilizing an optical sensor provided in the gun-type controller. It is determined that the target object has been hit when the target object is present at the detected impact position, and that the target object has not been hit when the target object is not present at the detected impact position. The player can thus virtually experience shooting by playing the gun game.
JP-A-8-226793 and JP-A-11-319316 disclose a related-art position detection system used for such a gun game.
In JP-A-8-226793, at least one target is provided around the display screen. The position of the target is detected from the acquired image, and the impact position of the gun-type controller is detected based on the detected position of the target. In JP-A-11-319316, the frame of the monitor screen is displayed, and the impact position of the gun-type controller is detected based on the detected position of the frame.
According to these related-art technologies, however, since the impact position is detected based on the position of the target or the frame, the detection accuracy decreases. Moreover, a calibration process for specifying the position of the target is required as the initial setting before starting the game. This process is troublesome for the player.
Digital watermarking technology that embeds secret data in an image has been known. However, position detection data has not been embedded by digital watermarking technology, and digital watermarking technology has not been applied to a position detection system for a gun game or the like.
According to one aspect of the invention, there is provided a position detection system that detects a pointing position, the position detection system comprising:
an image acquisition section that acquires an image from an imaging device when the imaging device has acquired an image of an imaging area corresponding to the pointing position from a display image, the display image being generated by embedding a marker image as a position detection pattern in an original image; and
a position detection section that performs a calculation process that detects the marker image embedded in the acquired image based on the acquired image to determine the pointing position corresponding to the imaging area.
According to another aspect of the invention, there is provided a position detection method comprising:
generating a display image by embedding a marker image as a position detection pattern in an original image, and outputting the generated display image to a display section;
detecting the marker image embedded in an image acquired from the display image based on the acquired image;
determining a pointing position corresponding to an imaging area of the acquired image; and
performing a calculation process based on the determined pointing position.
According to another aspect of the invention, there is provided a computer-readable information storage medium storing a program that causes a computer to implement the above position detection method.
According to another aspect of the invention, there is provided an image generation device comprising:
an image generation section that generates a display image by embedding a marker image as a position detection pattern in an original image, and outputs the generated display image to a display section; and
a processing section that performs a calculation process based on a pointing position when the marker image embedded in an image acquired from the display image has been detected based on the acquired image and the pointing position corresponding to an imaging area of the acquired image has been determined.
Several aspects of the invention may provide a position detection system, a position detection method, an information storage medium, an image generation device, and the like that can detect a pointing position with high accuracy.
According to one embodiment of the invention, there is provided a position detection system that detects a pointing position, the position detection system comprising:
an image acquisition section that acquires an image from an imaging device when the imaging device has acquired an image of an imaging area corresponding to the pointing position from a display image, the display image being generated by embedding a marker image as a position detection pattern in an original image; and
a position detection section that performs a calculation process that detects the marker image embedded in the acquired image based on the acquired image to determine the pointing position corresponding to the imaging area.
According to this embodiment, an image is acquired from the imaging device when the imaging device has acquired an image from the display image generated by embedding the marker image in the original image. The calculation process that detects the marker image is performed based on the acquired image to determine the pointing position corresponding to the imaging area. According to this configuration, since the pointing position is detected by detecting the marker image embedded in the acquired image, the pointing position can be detected with high accuracy.
In the position detection system,
the display image may be generated by converting each pixel data of the original image using each pixel data of the marker image.
According to this configuration, the marker image can be embedded while maintaining the appearance (state) of the original image.
In the position detection system,
the display image may be generated by converting at least one of R component data, G component data, B component data, color difference component data, and brightness component data of each pixel of the original image using each pixel data of the marker image.
According to this configuration, the data of the marker image can be embedded in the R component data, G component data, B component data, color difference component data, or brightness component data of each pixel of the original image.
In the position detection system,
the marker image may include pixel data having a unique data pattern in each segmented area of the display image.
According to this configuration, the pointing position can be specified by utilizing the data pattern of the marker image that is unique in each segmented area.
In the position detection system,
each pixel data of the marker image may be generated from random number data based on a maximal-length sequence.
According to this configuration, the unique data pattern of the marker image can be generated by a simple method.
In the position detection system,
the imaging device may acquire an image of the imaging area that is smaller than a display area of the display image.
This makes it possible to relatively increase the resolution as compared with the case of acquiring an image of a large area, even if the number of pixels of the imaging device is small, so that the pointing position detection accuracy can be improved.
In the position detection system,
the position detection section may calculate a cross-correlation between the acquired image and the marker image, and may determine the pointing position based on the cross-correlation calculation results.
According to this configuration, the pointing position can be detected with high accuracy by performing the cross-correlation calculation process on the acquired image and the marker image.
In the position detection system,
the position detection section may perform a high-pass filter process on the cross-correlation calculation results or the marker image.
This makes it possible to reduce the power of noise due to the original image, so that the detection accuracy can be improved.
The position detection system may further comprise:
a reliability calculation section that calculates the reliability of the cross-correlation calculation results based on a maximum cross-correlation value and a distribution of cross-correlation values.
This makes it possible to implement various processes utilizing the determined reliability.
The position detection system may further comprise:
an image correction section that performs an image correction process on the acquired image,
wherein the position detection section may determine the pointing position based on the acquired image that has been subjected to the image correction process by the image correction section.
This makes it possible to implement an appropriate position detection process even if the positional relationship with the imaging device has changed, for example.
According to another embodiment of the invention, there is provided a position detection method comprising:
generating a display image by embedding a marker image as a position detection pattern in an original image, and outputting the generated display image to a display section;
detecting the marker image embedded in an image acquired from the display image based on the acquired image;
determining a pointing position corresponding to an imaging area of the acquired image; and
performing a calculation process based on the determined pointing position.
According to another embodiment of the invention, there is provided a computer-readable information storage medium storing a program that causes a computer to implement the above position detection method.
According to this embodiment, the display image is generated by embedding the marker image in the original image, and displayed on the display section. When the marker image has been detected based on the acquired image acquired from the display image, and the pointing position has been determined, various calculation processes are performed based on the determined pointing position. Since the pointing position is detected by detecting the marker image embedded in the acquired image, the pointing position can be detected with high accuracy, and utilized for various calculation processes.
The position detection method may further comprise:
performing a game process including a game result calculation process based on the pointing position.
This makes it possible to implement the game process (e.g., game result calculation process) utilizing the pointing position that has been determined with high accuracy.
The position detection method may further comprise:
generating the display image by converting each pixel data of the original image using each pixel data of the marker image.
The position detection method may further comprise:
generating the display image by converting at least one of R component data, G component data, B component data, color difference component data, and brightness component data of each pixel of the original image using each pixel data of the marker image.
In the position detection method,
the marker image may include pixel data having a unique data pattern in each segmented area of the display image.
In the position detection method,
each pixel data of the marker image may be generated from random number data based on a maximal-length sequence.
The position detection method may further comprise:
changing the marker image with the lapse of time.
This makes it possible to increase the total amount of information included in the marker images, so that the detection accuracy can be improved.
The position detection method may further comprise:
calculating a cross-correlation between the acquired image and the marker image in order to determine the pointing position; and
determining the reliability of the cross-correlation calculation results, and changing the marker image based on the determined reliability.
According to this configuration, since the marker image with high position detection reliability is embedded in the original image, the detection accuracy can be improved.
The position detection method may further comprise:
changing the marker image corresponding to the original image.
This makes it possible to embed a marker image appropriate for the appearance (state) of the original image.
The position detection method may further comprise:
acquiring disturbance measurement information; and
changing the marker image based on the disturbance measurement information.
According to this configuration, since an optimum marker image can be embedded based on the disturbance measurement information, an appropriate position detection process can be implemented.
The position detection method may further comprise:
outputting the original image in which the marker image is not embedded as the display image when a given condition has not been satisfied; and
outputting an image generated by embedding the marker image in the original image as the display image when the given condition has been satisfied.
According to this configuration, since an image generated by embedding the marker image in the original image is displayed only when the given condition has been satisfied, the marker image can be rendered inconspicuous so that the quality of the display image can be improved.
The position detection method may further comprise:
generating a position detection original image as the original image when the given condition has been satisfied; and
outputting an image generated by embedding the marker image in the position detection original image as the display image.
This makes it possible to implement an effect utilizing the position detection original image.
The position detection method may further comprise:
outputting an image generated by embedding the marker image in the original image as the display image when it has been determined that a position detection timing has been reached based on instruction information from a pointing device.
According to this configuration, since an image generated by embedding the marker image in the original image is displayed when the position detection timing has been reached based on the instruction information from the pointing device, the marker image can be rendered inconspicuous so that the quality of the display image can be improved.
The position detection method may further comprise:
performing a game process including a game result calculation process based on the pointing position; and
outputting an image generated by embedding the marker image in the original image as the display image when a given game event has occurred during the game process.
According to this configuration, since an image generated by embedding the marker image in the original image is displayed when a given game event has occurred, the marker image can be embedded corresponding to the game event.
The position detection method may further comprise:
determining the pointing position based on the acquired image acquired from the display image when a given condition has been satisfied.
According to this configuration, an image generated by embedding the marker image in the original image may always be displayed irrespective of whether or not the given condition has been satisfied, and the pointing position may be determined from an image acquired from the display image when the given condition has been satisfied. For example, it may be determined that the given condition has been satisfied when the position detection timing has been reached based on instruction information from the pointing device, or when a given game event has occurred during the game process, and the pointing position may be determined using the acquired image captured at the timing at which the given condition has been satisfied.
Embodiments of the invention are described below. Note that the following embodiments do not in any way limit the scope of the invention laid out in the claims. Note also that all elements of the following embodiments should not necessarily be taken as essential requirements for the invention.
1. Position Detection System
The position detection system 10 includes an image acquisition section 20, an image correction section 22, a position detection section 24, and a reliability calculation section 26. Note that the position detection system 10 according to this embodiment is not limited to this configuration; various modifications may be made, such as omitting some of these elements or adding other elements.
The image acquisition section 20 acquires an image (acquired image) acquired (photographed) by a camera 12 (imaging device in a broad sense). Specifically, the image acquisition section 20 acquires an image from the camera 12 when the camera 12 has acquired an image of an imaging area IMR corresponding to a pointing position PP (i.e., imaging position) from the display image generated by embedding the marker image as the position detection pattern in the original image (i.e., a composite image of the original image and the marker image).
The pointing position PP is a position within the imaging area IMR, for example. The pointing position PP may be a center position (gaze point position) of the imaging area IMR, a corner position of the imaging area IMR, or the like.
Specifically, the imaging device included in the camera 12 acquires an image of the imaging area IMR that is smaller than the display area of the display image. The image acquisition section 20 acquires the image acquired by the imaging device, and the position detection section 24 detects the pointing position PP based on the acquired image of the imaging area IMR. This makes it possible to relatively increase the resolution even if the number of pixels of the imaging device is small, so that the pointing position PP can be detected with high accuracy.
The image correction section 22 performs an image correction process on the acquired image. For example, the image correction section 22 performs at least one of a rotation process and a scaling process on the acquired image. For example, the image correction section 22 performs an image correction process (e.g., rotation process or scaling process) that cancels a change in pan or tilt of the camera 12, rotation of the camera 12 around the visual axis, or the distance between the camera 12 and the display screen. For example, a sensor that detects rotation or the like may be provided in the camera 12, and the image correction section 22 may correct the acquired image based on information detected by the sensor. Alternatively, the image correction section 22 may detect the slope of a straight area (e.g., pixel or black matrix) of the display screen based on the acquired image, and may correct the acquired image based on the detection results.
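For illustration only, the following Python sketch shows one way such a correction process might be implemented with standard image-processing routines, assuming the camera supplies a roll angle and a distance ratio as sensor readings (both names are hypothetical); the embodiment is not limited to this implementation.

```python
from scipy import ndimage  # standard rotation/scaling routines

def correct_image(acquired, roll_deg, distance_ratio):
    """Cancel camera roll and a change in camera-to-screen distance.

    roll_deg and distance_ratio are assumed sensor readings: rotation
    around the visual axis, and (current distance / reference distance).
    """
    # Rotate by the opposite angle to cancel the roll of the camera.
    img = ndimage.rotate(acquired, -roll_deg, reshape=False, mode="nearest")
    # A larger distance shrinks the display within the acquired image,
    # so scale by the distance ratio to restore the reference size.
    return ndimage.zoom(img, distance_ratio)
```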
The position detection section 24 detects the pointing position PP (indication position) based on the acquired image (e.g., the acquired image that has been subjected to the image correction process). For example, the position detection section 24 performs a calculation process that detects the marker image embedded in the acquired image based on the acquired image to determine the pointing position PP (indication position) corresponding to the imaging area IMR.
The calculation process performed by the position detection section 24 includes an image matching process that determines the degree of matching between the acquired image and the marker image. For example, the position detection section 24 performs the image matching process on the acquired image and each segmented area of the marker image, and detects the position of the segmented area for which the degree of matching becomes a maximum as the pointing position PP.
Specifically, the position detection section 24 calculates the cross-correlation between the acquired image and the marker image as the image matching process. The position detection section 24 determines the pointing position PP based on the cross-correlation calculation results (cross-correlation value or maximum cross-correlation value). In this case, the position detection section 24 may perform a high-pass filter process on the cross-correlation calculation results or the marker image. This makes it possible to utilize only a high-frequency region of the cross-correlation calculation results, so that the detection accuracy can be improved. Specifically, the original image is considered to have a high power in a low-frequency region. Therefore, the detection accuracy can be improved by removing a low-frequency component using the high-pass filter process.
The reliability calculation section 26 performs a reliability calculation process. For example, the reliability calculation section 26 calculates the reliability of the results of the image matching process performed on the acquired image and the marker image. The reliability calculation section 26 outputs the information about the pointing position as normal information when the reliability is high, and outputs error information or the like when the reliability is low. For example, when the position detection section 24 performs the cross-correlation calculation process as the image matching process, the reliability calculation section 26 calculates the reliability of the cross-correlation calculation results based on the maximum cross-correlation value and the distribution of the cross-correlation values.
In the first comparative example, it is necessary to provide the infrared LEDs 501 to 504 in addition to the camera 12. This results in an increase in cost or the like. Moreover, a calibration process (i.e., initial setting) must be performed before the player starts the game so that the camera 12 can recognize the positions of the infrared LEDs 501 to 504. This process is troublesome for the player. Since the pointing position is detected based on a limited number of infrared LEDs 501 to 504, the detection accuracy and the disturbance resistance also decrease.
On the other hand, the position detection method according to this embodiment makes the infrared LEDs 501 to 504 unnecessary, so that the cost can be reduced and the calibration process can be omitted.
In the second comparative example, the pointing position is determined by matching an acquired image of an imaging area IMR1 (indicated by B1 in the drawing) against the original image. The detection accuracy of this method, however, depends on the content of the original image; when the imaging area corresponds to a featureless part of the original image, for example, the matching process cannot determine the position reliably.
In order to solve this problem, the position detection method according to this embodiment embeds a marker image (position detection pattern) in the original image.
When an image of the imaging area IMR1 has been acquired by the camera 12 (imaging device) (see C1 in the drawing), the marker image pattern embedded in the acquired image is detected, and the position of the imaging area IMR1 is determined from the detected pattern.
In this case, the matching process is performed on the marker image embedded in the acquired image and the marker image set to the corresponding segmented area, instead of performing the matching process on the acquired image and the original image. Specifically, when the marker image embedded in the acquired image of the imaging area IMR (indicated by D1 in the drawing) matches the data pattern set to one of the segmented areas, the position of that segmented area is determined to be the pointing position PP.
2. Position Detection Process
An example of the position detection process is described below. Note that the position detection process according to this embodiment is not limited to the following method. It is possible to implement various modifications using various image matching processes.
2.1 Position Detection Using M-Array Marker Image
The data pattern of the marker image may be set using maximal-length sequence random numbers. Specifically, each pixel data of the marker image is set using an M-array (two-dimensionally extended maximal-length sequence). Note that the random number data used to generate the marker image data is not limited to the maximal-length sequence. For example, various PN sequences (e.g., Gold sequence) may be used.
The maximal-length sequence is a code sequence that is generated by shift registers having a given number of stages and feedback, and has the maximum possible cycle. For example, the cycle of a kth-order (k corresponds to the number of stages of the shift registers) maximal-length sequence is expressed by L = 2^k − 1. The M-array is a two-dimensional array of maximal-length sequence random numbers.
Specifically, a kth-order maximal-length sequence a0, a1, . . . , aL−1 is generated, and its elements are disposed in the M-array (i.e., an array of M rows and N columns) in accordance with the following rules.
(I) The element a0 is disposed at the upper left corner of the M-array.
(II) The element a1 is disposed at the lower right of a0. Each subsequent element is sequentially disposed at the lower right of the preceding element.
(III) The elements are disposed on the assumption that the upper end and the lower end of the array are connected. Specifically, when the lowermost row has been reached, the subsequent element is disposed in the uppermost row. Likewise, the elements are disposed on the assumption that the left end and the right end of the array are connected.
For example, when k=4, L=15, M=3, and N=5, an M-array of three rows and five columns is generated, as illustrated by the sketch below.
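The following minimal Python sketch illustrates the procedure; the feedback taps of the shift registers (stages 4 and 1, a primitive configuration) and the initial state are assumptions chosen for the example, not values prescribed by this embodiment.

```python
def m_sequence(k=4, taps=(4, 1)):
    """Generate one period (L = 2^k - 1) of a maximal-length sequence
    using a k-stage shift register with XOR feedback from the taps."""
    state = [1] + [0] * (k - 1)          # any nonzero initial state
    seq = []
    for _ in range(2 ** k - 1):
        seq.append(state[-1])            # output of the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]           # XOR of the tapped stages
        state = [fb] + state[:-1]        # shift and feed back
    return seq

def m_array(seq, M, N):
    """Dispose the sequence diagonally in an M x N array, wrapping at
    the lower and right ends (rules (I) to (III) above)."""
    arr = [[0] * N for _ in range(M)]
    for t, a in enumerate(seq):
        arr[t % M][t % N] = a            # a_t at row t mod M, column t mod N
    return arr

if __name__ == "__main__":
    seq = m_sequence()                   # k = 4, so L = 15
    for row in m_array(seq, 3, 5):       # M = 3, N = 5
        print(row)
```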
In this embodiment, the M-array thus generated is set to the pixel data of the marker image. The marker image is embedded by converting each pixel data of the original image using each pixel data of the marker image that is set using the M-array.
In this embodiment, the pointing position (imaging position) is detected from the data of the acquired image.
2.2 Process Flow
A process flow of the position detection method according to this embodiment is described below using flowcharts.
The cross-correlation between the acquired image and the marker image is calculated (see section 2.3 below), and a position corresponding to the maximum cross-correlation value is then searched for to determine the pointing position (indication position) (step S14). For example, the position indicated by E3 in the drawing, at which the cross-correlation value becomes a maximum, corresponds to the pointing position.
The reliability of the pointing position is then calculated (step S15). When the reliability of the pointing position is high, information about the pointing position is output to the image generation device (game device) described later or the like (steps S16 and S17). When the reliability of the pointing position is low, error information is output (step S18).
A two-dimensional DFT process is performed on each of the acquired image (step S21) and the marker image. A high-pass filter process is then performed on the two-dimensional DFT results for the marker image (step S23). The original image (e.g., game image) has high power in a low-frequency region. On the other hand, the M-array image has equal power over the entire frequency region. The power of the original image in the low-frequency region serves as noise during the position detection process. Therefore, the power of this noise is reduced by attenuating the low-frequency region using the high-pass filter process that removes a low-frequency component. This reduces erroneous detection.
Note that the high-pass filter process may be performed on the cross-correlation calculation results. However, when implementing the cross-correlation calculation process by DFT, the process can be performed at high speed by performing the high-pass filter process on the two-dimensional DFT results for the marker image (M-array). When the marker image is not changed in real time, the two-dimensional DFT process on the marker image and the high-pass filter process on the two-dimensional DFT results may be performed once during initialization.
The two-dimensional DFT results for the acquired image obtained in the step S21 are multiplied by the two-dimensional DFT results for the marker image subjected to the high-pass filter process in the step S23 (step S24). An inverse two-dimensional DFT process is performed on the multiplication results to calculate the cross-correlation values.
The cross-correlation values are normalized so that the average is 0 and the variance is 1, and the maximum value of the normalized values is obtained. The occurrence probability of the maximum cross-correlation value is then calculated on the assumption that the distribution of the cross-correlation values is a normal distribution (step S33). The reliability is calculated based on the occurrence probability of the maximum cross-correlation value and the number of cross-correlation values (step S34).
2.3 Cross-Correlation Calculation Process
The details of the cross-correlation calculation process are described below.
For example, the two-dimensional DFT (two-dimensional discrete Fourier transform) process on M×N-pixel image data x(m, n) is expressed by the following expression (1).

X(k, l) = Σ(m=0 to M−1) Σ(n=0 to N−1) x(m, n)·exp{−i2π(km/M + ln/N)}  (1)

where k = 0, 1, . . . , M−1, l = 0, 1, . . . , N−1, and i is the imaginary unit.
The two-dimensional DFT process may be implemented using the one-dimensional DFT process. Specifically, the one-dimensional DFT process is performed on each row of the array (image data) x(m, n) to obtain an array X′. More specifically, the one-dimensional DFT process is performed on the first row x(0, n) of the array x(m, n), and the results are set in the first row of the array X′ (see the following expression (2)).

X′(0, l) = Σ(n=0 to N−1) x(0, n)·exp{−i2πln/N}  (2)
Likewise, the one-dimensional DFT process is performed on the second row, and the results are set in the second row of the array X′. The above process is repeated for each of the M rows to obtain the array X′. The one-dimensional DFT process is then performed on each column of the array X′. The results are expressed by X(k, l) (two-dimensional DFT results). The inverse two-dimensional DFT process may be implemented by applying the inverse one-dimensional DFT process to each row and each column.
Various fast Fourier transform (FFT) algorithms are known as the one-dimensional DFT process. The two-dimensional DFT process can be implemented at high speed by utilizing such an algorithm.
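For example, the equivalence between the row-and-column procedure described above and a direct two-dimensional DFT can be confirmed with a few lines of numpy (an illustration, not part of the embodiment):

```python
import numpy as np

x = np.random.rand(3, 5)                 # M x N image data
X_rows = np.fft.fft(x, axis=1)           # one-dimensional DFT of each row
X = np.fft.fft(X_rows, axis=0)           # then of each column
assert np.allclose(X, np.fft.fft2(x))    # matches the two-dimensional DFT
```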
Note that X(k, l) corresponds to the spectrum of the image data x(m, n). For example, when performing the high-pass filter process on the image data, a low-frequency component of the array X(k, l) may be removed. Specifically, since a low-frequency component corresponds to each corner of the array X(k, l), the high-pass filter process may be implemented by replacing the value at each corner with 0.
A cross-correlation is described below. A cross-correlation R(i, j) between two-dimensional arrays A and B of M rows and N columns is expressed by the following expression (3).

R(i, j) = Σ(m=0 to M−1) Σ(n=0 to N−1) A(m, n)·B(m+i, n+j)  (3)
When m + i > M − 1, the row index is set to m + i − M. Specifically, the upper end and the lower end of the array B are circularly connected. This also applies to the left end and the right end of the array B (when n + j > N − 1, the column index is set to n + j − N).
When the arrays A and B are identical M-arrays, only the cross-correlation R(0, 0) has a significantly large value, and other cross-correlations have a value close to 0. When moving the array A by i rows and j columns, only the cross-correlation R(i, j) has a significantly large value. The difference in position between two M-arrays can be determined from the maximum value of the cross-correlation R by utilizing the above properties.
The cross-correlation R may be calculated using the two-dimensional DFT process instead of directly calculating the cross-correlation R using the expression (3). In this case, since a fast Fourier transform algorithm can be used, the process can be performed at high speed as compared with the case of directly calculating the cross-correlation R.
Specifically, the two-dimensional DFT process is performed on the arrays A and B to obtain results A′ and B′. The corresponding values of the results A′ and B′ are multiplied to obtain results C. Specifically, the value in the mth row and the nth column of the results A′ is multiplied (complex-multiplied) by the value in the mth row and the nth column of the results B′ to obtain the value in the mth row and the nth column of the results C. Specifically, C(m, n)=A′(m, n)×B′(m, n). The inverse two-dimensional DFT process is performed on the value C(m, n) to obtain the cross-correlation R(m, n).
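A minimal numpy sketch of this procedure follows. One detail that C(m, n)=A′(m, n)×B′(m, n) leaves implicit is that obtaining the cross-correlation of expression (3), rather than a circular convolution, requires complex-conjugating one of the two spectra; the corner-zeroing high-pass filter follows the description above, and its cutoff value is an assumption.

```python
import numpy as np

def high_pass(F, cutoff=2):
    """Zero the low-frequency corners of a two-dimensional DFT array."""
    F = F.copy()
    F[:cutoff, :cutoff] = 0
    F[:cutoff, -cutoff:] = 0
    F[-cutoff:, :cutoff] = 0
    F[-cutoff:, -cutoff:] = 0
    return F

def cross_correlation(A, B, hpf=False):
    """Cross-correlation R(i, j) of expression (3), computed via the DFT."""
    FA = np.fft.fft2(A)                        # A'
    FB = np.fft.fft2(B)                        # B'
    if hpf:
        FB = high_pass(FB)                     # reduce low-frequency noise
    # conjugate one spectrum, multiply, then apply the inverse DFT
    return np.fft.ifft2(np.conj(FA) * FB).real

# The position of the maximum value reveals the shift between two M-arrays:
A = np.where(np.random.rand(64, 64) < 0.5, -1.0, 1.0)  # stand-in marker
B = np.roll(A, (5, 9), axis=(0, 1))                    # copy shifted by (5, 9)
R = cross_correlation(A, B)
print(np.unravel_index(np.argmax(R), R.shape))         # -> (5, 9)
```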
2.4 Reliability
The details of the reliability calculation process are described below.
The average and the variance of the N×M pieces of data included in the cross-correlation R(i, j) are calculated to normalize the cross-correlation R(i, j) (average = 0, variance = 1). The maximum value of the normalized data (i.e., the data corresponding to the pointing position) is referred to as u (see F1 in the drawing).
For example, the upper probability P(u) in the normal distribution (average = 0, variance = 1) is expressed by the following expression (4).

P(u) = (1/√(2π)) ∫[u, ∞) exp(−t²/2) dt  (4)
The upper probability P(u) is calculated using Shenton's continued fraction expansion shown by the following expression (5), for example.
The reliability s is defined by the following expression (6).
s = {1 − P(u)}^(N×M)  (6)
The reliability s is a value from 0 to 1. The position information (pointing position or imaging position) calculated from the maximum value u is more reliable as the reliability s becomes closer to 1. Note that the reliability s is not the probability that the position information is accurate. Specifically, the upper probability P(u) corresponds to the probability that a value as large as u occurs when the marker image is not embedded (watermarked), and the reliability s = {1 − P(u)}^(N×M) (see the expression (6)) becomes close to 1 when it is likely that the marker image is embedded. Therefore, when the reliability s is close to 1, it is considered that the watermark of the marker image has been detected, and the position information is reliable.
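As an illustration, the reliability calculation can be sketched as follows; here the upper probability is evaluated with the complementary error function instead of the continued fraction of expression (5), which is an implementation substitution:

```python
import math
import numpy as np

def reliability(R):
    """Reliability s = {1 - P(u)}^(N*M) of cross-correlation results R."""
    Rn = (R - R.mean()) / R.std()            # normalize: average 0, variance 1
    u = Rn.max()                             # maximum normalized value
    P = 0.5 * math.erfc(u / math.sqrt(2.0))  # upper tail of N(0, 1)
    return (1.0 - P) ** R.size               # expression (6), N x M values

# A reliability close to 1 indicates that the detected position is reliable.
```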
Note that the upper probability P(u) may be directly used as the reliability. Specifically, since the reliability s is a monotonic function of the upper probability P(u), the upper probability P(u) may be directly used as the reliability when the position information is considered to be reliable only when the reliability is equal to or larger than a given value.
3. Image Generation Device
A configuration example of an image generation device and a pointing device (gun-type controller) to which the position detection system according to this embodiment is applied is described below.
The gun-type controller 30 includes an indicator 32 (casing) that is formed to imitate the shape of a gun, the trigger 34 that is provided on the grip of the indicator 32, and a lens 36 (optical system) and the imaging device 38 that are provided near the muzzle of the indicator 32. The gun-type controller 30 also includes a processing section 40 and a communication section 50. Note that the gun-type controller 30 (pointing device) is not limited to this configuration; various modifications may be made, such as omitting some of these elements or adding other elements.
The imaging device 38 is formed by a sensor (e.g., CCD or CMOS sensor) that can acquire an image. The processing section 40 (control circuit) controls the entire gun-type controller, and calculates the indication position, for example. The communication section 50 exchanges data between the gun-type controller 30 and the image generation device 90 (main device). The functions of the processing section 40 and the communication section 50 may be implemented by hardware (e.g., ASIC), or may be implemented by a processor (CPU) and software.
The processing section 40 includes an image acquisition section 42, the image correction section 44, the position detection section 46, and the reliability calculation section 48.
The image acquisition section 42 acquires an image acquired by the imaging device 38. Specifically, the image acquisition section 42 acquires an image from the imaging device 38 when the imaging device 38 has acquired an image of the imaging area IMR corresponding to the pointing position PP from the display image generated by embedding the marker image in the original image. The image correction section 44 performs the image correction process (e.g., rotation process or scaling process) on the acquired image.
The position detection section 46 performs a calculation process that detects the marker image embedded (synthesized) in the acquired image based on the acquired image to determine the pointing position PP corresponding to the imaging area IMR. Specifically, the position detection section 46 calculates a cross-correlation between the acquired image and the marker image to determine the pointing position PP. The reliability calculation section 48 calculates the reliability of the pointing position PP. Specifically, the reliability calculation section 48 calculates the reliability of the pointing position PP based on the maximum cross-correlation value and the distribution of the cross-correlation values.
The image generation device 90 (main device) includes a processing section 100, an image generation section 150, a storage section 170, an interface (I/F) section 178, and a communication section 196. Note that various modifications may be made, such as omitting some of the elements or adding other elements.
The processing section 100 (processor) controls the entire image generation device 90, and performs various processes (e.g., game process) based on data from an operation section of the gun-type controller 30, a program, and the like. Specifically, when the marker image embedded in the acquired image has been detected based on the acquired image of the display image displayed on the display section 190, and the pointing position PP corresponding to the imaging area IMR has been determined, the processing section 100 performs various calculation processes based on the determined pointing position PP. For example, the processing section 100 performs the game process including a game result calculation process based on the pointing position. The function of the processing section 100 may be implemented by hardware such as a processor (e.g., CPU or GPU) or an ASIC (e.g., gate array), or a program.
The image generation section 150 (drawing section) performs a drawing process based on the results of various processes performed by the processing section 100 to generate a game image, and outputs the generated game image to the display section 190. When generating a three-dimensional game image, the image generation section 150 performs a geometric process (e.g., coordinate transformation, clipping, perspective transformation, or light source calculations), and generates drawing data (e.g., primitive surface vertex (constituent point) position coordinates, texture coordinates, color (brightness) data, normal vector, or alpha-value) based on the results of the geometric process, for example. The image generation section 150 draws the object (one or more primitive surfaces) subjected to the geometric process in a drawing buffer 176 (i.e., a buffer (e.g., frame buffer or work buffer) that can store pixel-unit image information) based on the drawing data (primitive surface data). The image generation section 150 thus generates an image viewed from a virtual camera (given viewpoint) in an object space. Note that the image that is generated according to this embodiment and displayed on the display section 190 may be a three-dimensional image or a two-dimensional image.
The image generation section 150 generates a display image by embedding the marker image as the position detection pattern in the original image, and outputs the generated display image to the display section 190. Specifically, a conversion section 152 included in the image generation section 150 generates the display image by converting each pixel data of the original image using each pixel data (M-array) of the marker image. For example, the conversion section 152 generates the display image by converting at least one of the R component data, G component data, and B component data of each pixel of the original image, or at least one of the color difference component data and brightness component data (e.g., YUV) of each pixel of the original image, using each pixel data of the marker image.
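The conversion function itself is not fixed by this embodiment; as one hedged example, the sketch below adds a small signed offset to the brightness component of each pixel according to the corresponding marker pixel, where the offset magnitude (depth) is an assumed parameter corresponding to the deep and light patterns discussed later:

```python
import numpy as np

def embed_marker(original_y, marker, depth=4):
    """Convert each pixel data of the original image using the marker.

    original_y: (H, W) brightness component of the original image (uint8)
    marker:     (H, W) array of 0/1 marker pixel data (e.g., a tiled M-array)
    depth:      embedding amplitude (a "deep" pattern uses a larger value)
    """
    offset = np.where(marker == 1, depth, -depth)
    out = original_y.astype(np.int16) + offset
    return np.clip(out, 0, 255).astype(np.uint8)
```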
The image generation section 150 may output the original image as the display image when a given condition has not been satisfied, and may output an image generated by embedding the marker image in the original image as the display image when the given condition has been satisfied. For example, the image generation section 150 may generate a position detection original image (position detection image) as the original image when the given condition has been satisfied, and may output an image generated by embedding the marker image in the position detection original image as the display image. Alternatively, the image generation section 150 may output an image generated by embedding the marker image in the original image as the display image when the image generation section 150 has determined that a position detection timing has been reached based on instruction information (trigger input information) from the gun-type controller 30 (pointing device). The image generation section 150 may output an image generated by embedding the marker image in the original image as the display image when a given game event has occurred during the game process.
The storage section 170 serves as a work area for the processing section 100, the communication section 196, and the like. The function of the storage section 170 may be implemented by a RAM (DRAM or VRAM) or the like. The storage section 170 includes a marker image storage section 172, the drawing buffer 176, and the like.
The interface (I/F) section 178 functions as an interface between the image generation device 90 and an information storage medium 180. The interface (I/F) section 178 accesses the information storage medium 180, and reads a program and data from the information storage medium 180.
The information storage medium 180 (computer-readable medium) stores a program, data, and the like. The function of the information storage medium 180 may be implemented by an optical disk (CD or DVD), a hard disk drive (HDD), a memory (e.g., ROM), or the like. The processing section 100 performs various processes according to this embodiment based on a program (data) stored in the information storage medium 180. Specifically, a program that causes a computer (i.e., a device including an operation section, a processing section, a storage section, and an output section) to function as each section according to this embodiment (i.e., a program that causes a computer to execute the process of each section) is stored in the information storage medium 180.
A program (data) that causes a computer to function as each section according to this embodiment may be distributed to the information storage medium 180 (or storage section 170) from an information storage medium included in a host device (server) via a network and the communication section 196. Use of the information storage medium included in the host device (server) is included within the scope of the invention.
The display section 190 outputs an image generated according to this embodiment. The function of the display section 190 may be implemented by a CRT, an LCD, a touch panel display, or the like.
The communication section 196 communicates with the outside (e.g., gun-type controller 30) via a cable or wireless network. The function of the communication section 196 may be implemented by hardware (e.g., communication ASIC or communication processor) or communication firmware.
The processing section 100 includes a game processing section 102, a change processing section 104, a disturbance measurement information acquisition section 106, and a condition determination section 108.
The game processing section 102 performs various game processes (e.g., game result calculation process). The game process includes calculating the game results, determining the details of the game and the game mode, starting the game when game start conditions have been satisfied, proceeding with the game, and finishing the game when game finish conditions have been satisfied, for example.
For example, the game processing section 102 performs a hit check process based on the pointing position PP detected by the gun-type controller 30. Specifically, the game processing section 102 performs a hit check process on a virtual bullet (shot) fired from the gun-type controller 30 (weapon-type controller) and the target object (target).
More specifically, the game processing section 102 (hit processing section) determines the trajectory of the virtual bullet based on the pointing position PP determined based on the acquired image, and determines whether or not the trajectory intersects the target object disposed in the object space. The game processing section 102 determines that the virtual bullet has hit the target object when the trajectory intersects the target object, and performs a process that decreases the durability value (strength value) of the target object, a process that generates an explosion effect, a process that changes the position, direction, motion, color, or shape of the target object, and the like. The game processing section 102 determines that the virtual bullet has not hit the target object when the trajectory does not intersect the target object, and performs a process that causes the virtual bullet to disappear, and the like. Note that a simple object (bounding volume or bounding box) that simply represents the shape of the target object may be provided, and a hit check between the simple object and the virtual bullet (trajectory of the virtual bullet) may be performed.
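For illustration, a hit check against a bounding sphere that simply represents the target object might look like the following sketch, treating the virtual bullet's trajectory as a ray from the muzzle; this is one common formulation, not the only possible hit check:

```python
import numpy as np

def bullet_hits_target(origin, direction, center, radius):
    """Hit check between a virtual bullet's trajectory (a ray) and a
    bounding sphere of the given center and radius."""
    d = direction / np.linalg.norm(direction)    # unit direction of the shot
    t = float(np.dot(center - origin, d))        # closest approach along ray
    if t < 0.0:
        return False                             # target is behind the muzzle
    closest = origin + t * d
    return float(np.linalg.norm(center - closest)) <= radius
```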
The change processing section 104 changes the marker image or the like. For example, the change processing section 104 changes the marker image with the lapse of time. The change processing section 104 changes the marker image depending on the status of the game that progresses based on the game process performed by the game processing section 102, for example. Alternatively, the change processing section 104 changes the marker image based on the reliability of the pointing position PP, for example. When a cross-correlation between the acquired image and the marker image has been calculated, and the reliability of the cross-correlation calculation results has been determined, the change processing section 104 changes the marker image based on the determined reliability. The change processing section 104 may change the marker image corresponding to the original image. For example, when a different game image is generated depending on the game stage, the change processing section 104 changes the marker image depending on the game stage. When using a plurality of marker images, data of the plurality of marker images is stored in the marker image storage section 172.
The disturbance measurement information acquisition section 106 acquires measurement information about a disturbance (e.g., sunlight). Specifically, the disturbance measurement information acquisition section 106 acquires disturbance measurement information from a disturbance measurement sensor (not shown). The change processing section 104 changes the marker image based on the acquired disturbance measurement information. For example, the change processing section 104 changes the marker image based on the intensity, color, or the like of ambient light.
The condition determination section 108 determines whether or not a given marker image change condition has been satisfied due to a shooting operation or occurrence of a game event. The change processing section 104 changes (switches) the marker image when the given marker image change condition has been satisfied. The image generation section 150 outputs the original image as the display image when the given marker image change condition has not been satisfied, and outputs an image generated by embedding the marker image in the original image as the display image when the given marker image change condition has been satisfied.
4. Marker Image Change Process
In this embodiment, the pattern of the marker image embedded in the original image may be changed. For example, a marker image MI1 and a marker image MI2 having different patterns are selectively embedded in the original image.
For example, the marker image MI1 is generated using a first M-array M1, and the marker image MI2 is generated using a second M-array M2.
The total amount of information included in the marker image increases by changing the marker image with the lapse of time, so that the detection accuracy can be improved. For example, when the pointing position detection accuracy cannot be increased using the marker image MI1 generated using the first M-array M1 depending on the conditions (e.g., surrounding environment), the detection accuracy can be improved by displaying an image generated by embedding the marker image MI2 generated using the second M-array M2 in the original image. The marker image cannot be changed by a method that embeds the marker image in a printed matter, for example. However, the marker image can be changed by the method according to this embodiment that displays an image generated by embedding the marker image in the original image on the display section 190.
The marker image embedded in the original image may also be changed based on the reliability described above. Specifically, when the reliability of the cross-correlation calculation results obtained using one marker image is low, the marker image is changed to another marker image, so that a marker image with high position detection reliability is embedded in the original image.
The depth of the pattern of the marker image may also be changed. For example, a marker image MI1 having a deep pattern may be used in one case, and a marker image having a light pattern in another. When using the deep pattern, the amount by which each pixel data of the original image is converted is large, so that the position detection accuracy is improved; on the other hand, the marker image tends to stand out in the display image.
It is desirable that the marker image does not stand out in order to improve the quality of the display image displayed on the display section 190. Therefore, it is desirable to use the light pattern shown in
In this embodiment, the marker image may be changed corresponding to the original image. For example, a different marker image is embedded depending on whether the original image represents a daytime stage or a night stage.
Specifically, since the brightness of the entire original image is high in the daytime stage, the marker image does not stand out even if the marker image having a deep pattern (high brightness) is embedded in the original image. Moreover, since the frequency band of the original image is shifted to the high-frequency side, the position detection accuracy can be improved by embedding the marker image having a deep pattern (high brightness) in the original image. Therefore, a marker image having a deep pattern is used in the daytime stage (see
On the other hand, since the brightness of the entire original image is low in the night stage, the marker image stands out as compared with the daytime stage when the marker image having a deep pattern (high brightness) is embedded in the original image. Moreover, since the frequency band of the original image is shifted to the low-frequency side, an appropriate position detection process can be implemented even if the marker image does not have high brightness. Therefore, a marker image having a light pattern is used in the night stage (see
The marker image may be changed depending on the surrounding environment of the display section 190. For example, a disturbance measurement sensor 60 measures the surrounding environment (e.g., the brightness of the room), and the marker image is changed based on the measurement results.
For example, when the disturbance measurement sensor 60 has detected that the time zone is daytime, and the room is bright, a marker image having a deep pattern (high brightness) is embedded in the original image. Specifically, when the room is bright, the marker image does not stand out even if the marker image has high brightness. Moreover, the position detection accuracy can be improved by increasing the brightness of the marker image based on the brightness of the room. Therefore, a marker image having high brightness is embedded in the original image.
When the disturbance measurement sensor 60 has detected that the time zone is night, and the room is dark, a marker image having a light pattern (low brightness) is embedded in the original image. Specifically, when the room is dark, the marker image stands out if the marker image has high brightness. Moreover, an appropriate position detection process can be implemented without increasing the brightness of the marker image to a large extent. Therefore, a marker image having low brightness is embedded in the original image.
According to the above method, since an optimum marker image is selected and embedded depending on the surrounding environment of the display section 190, an appropriate position detection process can be implemented.
5. Embedding of Marker Image Based on Given Condition
The marker image need not necessarily be always embedded. The marker image may be embedded (output) only when a given condition has been satisfied. Specifically, the original image in which the marker image is not embedded is output as the display image when a given condition has not been satisfied, and an image generated by embedding the marker image in the original image is output as the display image when a given condition has been satisfied.
Specifically, an image generated by embedding the marker image in the original image is displayed only when a given condition (i.e., the player has pulled the trigger 34 of the gun-type controller 30) has been satisfied. Therefore, since an image generated by embedding the marker image in the original image is displayed only at the timing of shooting, the marker image can be rendered inconspicuous so that the quality of the display image can be improved. Specifically, since the marker image is momentarily displayed only at the timing at which the player has pulled the trigger 34, the player does not easily become aware that the marker image is embedded.
Note that the frame in which the player has pulled the trigger 34 need not necessarily be the same as the frame in which an image generated by embedding the marker image in the original image is displayed. For example, an image generated by embedding the marker image in the original image may be displayed when several frames have elapsed after the frame in which the player has pulled the trigger 34. A given condition according to this embodiment is also not limited to the condition whereby the player has pulled the trigger 34.
Alternatively, an image generated by embedding the marker image in the original image may be displayed when a given game event has occurred (i.e., a given condition has been satisfied). Examples of the given game event include a game story change event, a game stage change event, a character generation event, a target object lock-on event, an object contact event, and the like.
For example, the marker image for a shooting hit check is unnecessary before the target object (target) appears. In this case, only the original image is displayed. When a character (target object) has appeared (has been generated) (i.e., a given condition has been satisfied), an image generated by embedding the marker image in the original image is displayed so that the hit check process can be performed on the target object and the virtual bullet (shot). When the target object has disappeared from the screen, only the original image is displayed without embedding the marker image since the hit check process is unnecessary. Alternatively, only the original image may be displayed before the target object is locked on, and an image generated by embedding the marker image in the original image may be displayed when a target object lock-on event has occurred so that the hit check process can be performed on the virtual bullet and the target object.
When the given condition has been satisfied, a position detection original image may be generated as the original image, and an image generated by embedding the marker image in the position detection original image may be displayed as an effect image.
According to this configuration, since the player recognizes that the image generated by embedding the marker image in the position detection original image is an effect image, the marker image can be rendered more inconspicuous.
Note that the position detection original image is not limited to this example.
For example, an image generated by embedding the marker image in the original image may always be displayed irrespective of whether or not a given condition has been satisfied, and the pointing position PP may be determined based on an image acquired from the display image when a given condition has been satisfied. For example, such an image may always be displayed on the display section 190, and the pointing position PP may be determined using the acquired image captured at the timing at which a given condition (e.g., the player has pulled the trigger 34) has been satisfied.
6. Process of Image Generation Device
A specific processing example of the image generation device 90 according to this embodiment is described below using a flowchart.
The image generation device 90 determines whether or not a frame (1/60th of a second) update timing has been reached (step S41). When the frame update timing has been reached, the image generation device 90 determines whether or not the player has pulled the trigger 34 of the gun-type controller 30 (step S42). When the player has pulled the trigger 34, the image generation device 90 performs the marker image embedding process described above.
The image generation device 90 determines whether or not an impact position (pointing position) acquisition timing has been reached (step S44). When the impact position acquisition timing has been reached, the image generation device 90 acquires the impact position from the gun-type controller 30 (step S45). The image generation device 90 determines whether or not the reliability of the impact position is high (step S46). When the reliability of the impact position is high, the image generation device 90 employs the acquired impact position (step S47). When the reliability of the impact position is low, the image generation device 90 employs the preceding impact position stored in the storage section (step S48). The image generation device 90 performs the game process (e.g., hit check process and game result calculation process) based on the employed impact position (step S49).
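The flow can be summarized by the following sketch; every name used here (trigger_pulled, embed_marker, and so on) is hypothetical and merely stands in for the corresponding step of the flowchart:

```python
def frame_update(device, controller):
    """One frame-update pass of the image generation device (steps S41-S49)."""
    if controller.trigger_pulled():                        # step S42
        image = device.embed_marker(device.original_image())
    else:
        image = device.original_image()                    # no marker embedded
    device.display(image)
    if device.impact_position_timing():                    # step S44
        pos, reliable = controller.get_impact_position()   # step S45
        if reliable:                                       # step S46
            device.last_impact_position = pos              # step S47
        # otherwise the preceding impact position is kept  # step S48
        device.game_process(device.last_impact_position)   # step S49
```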
The invention is not limited to the above embodiments. Various modifications may be made. Any term (e.g., gun-type controller or impact position) cited with a different term (e.g., pointing device or pointing position) having a broader meaning or the same meaning at least once in the specification and the drawings can be replaced by the different term in any place in the specification and the drawings.
The pointing position detection method, the marker image embedding method, the marker image change method, and the like are not limited to those described in connection with the above embodiments. Methods equivalent to the above methods are included within the scope of the invention. The invention may be applied to various games, and may also be used in applications other than a game. The invention may be applied to various image generation devices such as an arcade game system, a consumer game system, a large-scale attraction system in which a number of players participate, a simulator, a multimedia terminal, a system board that generates a game image, and a mobile phone.
Number | Date | Country | Kind |
---|---|---|---
2008-093518 | Mar 2008 | JP | national |
This application is a continuation of International Patent Application No. PCT/JP2009/056487, having an international filing date of Mar. 30, 2009, which designated the United States, the entirety of which is incorporated herein by reference. Japanese Patent Application No. 2008-093518 filed on Mar. 31, 2008 is also incorporated herein by reference in its entirety.
 | Number | Date | Country
---|---|---|---
Parent | PCT/JP2009/056487 | Mar 2009 | US
Child | 12893424 | | US