The disclosure of Japanese Patent Application No. 2010-214218, filed on Sep. 24, 2010, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to a computer-readable storage medium having a display control program stored therein, a display control apparatus, a display control system, and a display control method, and more particularly, relates to a computer-readable storage medium having a display control program stored therein, a display control apparatus, a display control system, and a display control method, for displaying an image obtained by taking an image of a virtual space, such that the image is superimposed on a real space and viewed by a user.
2. Description of the Background Art
Conventionally, a game apparatus (display control apparatus) is known which displays an image indicating a plurality of menu items (hereinafter, referred to as “menu image”), receives an operation of selecting one menu item from a user through an input device such as a touch panel or a cross key, and selects the one menu item from the plurality of menu items on the basis of the operation (e.g., see Japanese Laid-Open Patent Publication No. 2006-318393).
Further, an AR (Augmented Reality) technique is also known in which an image of the real world is taken with an imaging device such as a camera and an image of a virtual object can be displayed so as to be superimposed on the taken image of the real world. For example, in Japanese Laid-Open Patent Publication No. 2006-72667, when an image of the real world including a game card located in the real world (hereinafter, referred to as “real world image” in the present specification) is taken with an imaging device such as a camera, a game apparatus obtains a position and an orientation of the game card in the image of the real world. Then, the game apparatus calculates the relative positional relation between the imaging device and the game card on the basis of the obtained position and orientation, sets a virtual camera in a virtual space and locates an object on the basis of the calculation result, and generates an image of the object taken with the virtual camera. Then, the game apparatus generates and displays a superimposed image in which the generated image of the object is superimposed on the taken image of the real world (hereinafter, referred to as “augmented reality image” in the present specification). Note that an “augmented reality image” described in the present specification may include not only a superimposed image but also an image of an object that is superimposed on a real space and viewed by a user in an optical see-through technique.
Prior to performing a process for displaying an augmented reality image as shown in Japanese Laid-Open Patent Publication No. 2006-72667, it is necessary to display a menu image in some cases, in order for the user to select the performing of the process. Here, when a menu image is displayed by using the technique disclosed in Japanese Laid-Open Patent Publication No. 2006-318393, the game apparatus disclosed in Japanese Laid-Open Patent Publication No. 2006-72667 displays only a virtual image (an image generated by computer graphics) as a menu image without superimposing the virtual image on an image of the real world. Then, one menu item is selected by the user from among menu items indicated in the menu image, and the game apparatus displays an augmented reality image. As described above, the menu image is not an augmented reality image. Thus, when the display of the menu image is changed to the display of the augmented reality image, the user is made aware of the change of display, thereby impairing a feeling of the user being immersed in an augmented reality world (a world displayed by an augmented reality image). Further, when, while an augmented reality image is being displayed, the display is changed so as to display a menu image (virtual image) for the user to perform a menu operation, the same problem also arises.
Therefore, an object of the present invention is to provide a computer-readable storage medium having a display control program stored therein, a display control apparatus, a display control system, and a display control method which, for example, when a display of a menu image is changed to a display of an augmented reality image by selecting a menu item, prevent a user from strongly feeling the change.
The present invention has the following features to attain the object mentioned above.
(1) A computer-readable storage medium according to an aspect of the present invention has a display control program stored therein. The display control program is executed by a computer of a display control apparatus, which is connected to an imaging device and a display device that allows a real space to be viewed on a screen thereof. The display control program causes the computer to operate as taken image obtaining means, detection means, calculation means, virtual camera setting means, object location means, object image generation means, and display control means.
The taken image obtaining means obtains a taken image obtained by using the imaging device. The detection means detects a specific object from the taken image. The calculation means calculates a relative position of the imaging device and the specific object on the basis of a detection result of the specific object by the detection means. The virtual camera setting means sets a virtual camera in a virtual space on the basis of a calculation result by the calculation means. The object location means locates a selection object that corresponds to a menu item selectable by a user and is to be selected by the user, at a predetermined position in the virtual space that is based on a position of the specific object. The object image generation means takes an image of the virtual space with the virtual camera and generates an object image of the selection object. The display control means displays the object image on the display device such that the object image is superimposed on the real space on the screen and viewed by the user.
According to the above configuration, the image of the selection object that corresponds to the menu item selectable by the user and is to be selected by the user is displayed on the screen such that the image is superimposed on the real space and viewed by the user, whereby a menu image can be displayed as an augmented reality image in which the image of the virtual object is superimposed on the real space. Thus, for example, when a menu item is selected and a predetermined process of the selected menu item (hereinafter, referred to as “menu execution process”) is performed, even if an augmented reality image is displayed in the menu execution process, since the menu image is also an augmented reality image, the display of the menu image is changed to the display in the menu execution process without making the user strongly feel the change. Examples of the computer-readable storage medium include, but are not limited to, volatile memories such as RAM and nonvolatile memories such as CD-ROM, DVD, ROM, a flash memory, and a memory card.
(2) In another configuration example, the display control program may further cause the computer to operate as selection fixing means and activation means. The selection fixing means fixes selection of the selection object in accordance with an operation of the user. The activation means activates a predetermined process (menu execution process) of a menu item corresponding to the fixed selection object when the selection of the selection object is fixed by the selection fixing means.
According to the above configuration, a menu item is selected in a menu image displayed as an augmented reality image, whereby a menu execution process of the selected menu item is activated and performed.
(3) In another configuration example, in the computer-readable storage medium, the predetermined process includes a process based on the detection result of the specific object by the detection means. According to this configuration, the menu execution process includes the process based on the detection result of the specific object by the detection means (namely, a process of displaying an augmented reality image). Since a menu image and an image displayed in the menu execution process are augmented reality images as described above, a display of the menu image is changed to a display in the menu execution process without making the user strongly feel the change.
(4) In still another configuration example, the display control program may further cause the computer to operate as reception means. The reception means receives an instruction to redisplay the selection object from the user during a period when the predetermined process (menu execution process) is performed. When the instruction to redisplay the selection object is received by the reception means, the object location means may locate the selection object again. According to this configuration, the instruction to redisplay the selection object can be received from the user even during the period when the menu execution process is performed, and when the instruction is received, the selection object can be displayed again and the menu image can be displayed. Thus, when the user merely inputs the instruction to redisplay the selection object, the display in the menu execution process is changed to a display of the menu image. Therefore, change from the display of the menu image to the display in the menu execution process and change from the display in the menu execution process to the display of the menu image can be successively performed. The present invention also includes a configuration in which, at the change, the screen is briefly blacked out (displayed in black) or another image is briefly displayed.
(5) In still another configuration example, the activation means may activate an application as the predetermined process.
(6) In still another configuration example, the display control program may further cause the computer to operate as selection means. The selection means selects the selection object in accordance with a movement of either one of the display control apparatus or the imaging device. Thus, the user is not required to perform a troublesome operation such as an operation of an operation button, and can select the selection object by the simple operation of merely moving the display control apparatus or the imaging device.
(7) In still another configuration example, the selection means may select the selection object when the selection object is located on a sight line of the virtual camera that is set by the virtual camera setting means or on a predetermined straight line parallel to the sight line. In general, when moving the imaging device while taking an image with the imaging device, the user moves his or her own sight line in accordance with the movement. According to this configuration, when the user moves the imaging device, the user's sight line moves, and thus the sight line of the virtual camera also changes. Then, when the selection object is located on the sight line of the virtual camera or on the straight line parallel to the sight line, the selection object is selected. Therefore, the user can obtain a feeling as if selecting the selection object by moving his or her own sight line.
(8) In still another configuration example, the display control program may further cause the computer to operate as cursor display means. The cursor display means displays a cursor image at a predetermined position in a display area in which the object image is displayed. Thus, the user can know the direction of the sight line of the virtual camera and the direction of the straight line by the displayed position of the cursor image, and can easily select the selection object.
(9) In still another configuration example, the display control program may further cause the computer to operate as selection means and processing means. The selection means selects the selection object in accordance with a specific movement of either one of the display control apparatus or the imaging device. The processing means progresses the predetermined process activated by the activation means, in accordance with the specific movement of either one of the display control apparatus or the imaging device. According to this configuration, since the operation for selecting the selection object and the operation of the user in the menu execution process are the same, a menu image in which the operation for selecting the selection object is performed can be displayed as a tutorial image for the user to practice for the operation in the menu execution process.
(10) In still another configuration example, the display control program may further cause the computer to operate as selection means, determination means, and warning display means. The selection means selects the selection object in accordance with an inclination of either one of the display control apparatus or the imaging device. The determination means determines whether or not a distance between the specific object and the imaging device is equal to or less than a predetermined distance. The warning display means displays a warning on the display device when it is determined that the distance between the specific object and the imaging device is equal to or less than the predetermined distance. The predetermined distance is set to such a distance that tilting either one of the display control apparatus or the imaging device far enough to select the selection object would cause the specific object to fall outside the taken image.
The above configuration makes it possible to warn the user that, if an operation for selecting a selection object is performed, the selection object will no longer be displayed. Thus, the user is spared the time and effort of readjusting the apparatus so that a specific object that has fallen outside the taken image is brought back into the imaging range of the imaging device.
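By way of a non-limiting illustration only, the determination and warning described above may take a form such as the following C++ sketch. The Vec3 type, the function names, and the value of kWarningDistance are assumptions introduced here for explanation and are not taken from the present embodiment.

```cpp
// Illustrative only: warn when the marker is so close that tilting the apparatus
// far enough to select a selection object would push the marker out of frame.
// Vec3, the function names, and kWarningDistance are assumptions for explanation.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Relative position of the specific object (marker) as seen from the imaging
// device, e.g. the translation obtained by the calculation means.
float DistanceToMarker(const Vec3& markerInCameraSpace) {
    return std::sqrt(markerInCameraSpace.x * markerInCameraSpace.x +
                     markerInCameraSpace.y * markerInCameraSpace.y +
                     markerInCameraSpace.z * markerInCameraSpace.z);
}

void CheckMarkerDistance(const Vec3& markerInCameraSpace) {
    // "Predetermined distance": assumed value chosen so that the required tilt
    // would move the marker outside the imaging range.
    const float kWarningDistance = 0.25f;  // in metres, purely illustrative
    if (DistanceToMarker(markerInCameraSpace) <= kWarningDistance) {
        std::puts("Warning: the marker may leave the image when the apparatus is tilted.");
    }
}
```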
(11) A display control apparatus according to an aspect of the present invention is connected to an imaging device and a display device that allows a real space to be viewed on a screen thereof, and comprises taken image obtaining means, detection means, calculation means, virtual camera setting means, object location means, object image generation means, and display control means. The taken image obtaining means obtains a taken image obtained by using the imaging device. The detection means detects a specific object from the taken image. The calculation means calculates a relative position of the imaging device and the specific object on the basis of a detection result of the specific object by the detection means. The virtual camera setting means sets a virtual camera in a virtual space on the basis of a calculation result by the calculation means. The object location means locates a selection object that corresponds to a menu item selectable by a user and is to be selected by the user, at a predetermined position in the virtual space that is based on a position of the specific object. The object image generation means takes an image of the virtual space with the virtual camera and generates an object image of the selection object. The display control means displays the object image on the display device such that the object image is superimposed on the real space on the screen and viewed by the user.
(12) A display control system according to an aspect of the present invention is connected to an imaging device and a display device that allows a real space to be viewed on a screen thereof, and comprises taken image obtaining means, detection means, calculation means, virtual camera setting means, object location means, object image generation means, and display control means. The taken image obtaining means obtains a taken image obtained by using the imaging device. The detection means detects a specific object from the taken image. The calculation means calculates a relative position of the imaging device and the specific object on the basis of a detection result of the specific object by the detection means. The virtual camera setting means sets a virtual camera in a virtual space on the basis of a calculation result by the calculation means. The object location means locates a selection object that corresponds to a menu item selectable by a user and is to be selected by the user, at a predetermined position in the virtual space that is based on a position of the specific object. The object image generation means takes an image of the virtual space with the virtual camera and generates an object image of the selection object. The display control means displays the object image on the display device such that the object image is superimposed on the real space on the screen and viewed by the user.
(13) A display control method according to an aspect of the present invention is a display control method for taking an image of a real world by using an imaging device and displaying an image of a virtual object in a virtual space by using a display device that allows a real space to be viewed on a screen thereof, and comprises a taken image obtaining step, a detection step, a calculation step, a virtual camera setting step, an object location step, an object image generation step, and a display control step.
The taken image obtaining step obtains a taken image obtained by using the imaging device. The detection step detects a specific object from the taken image. The calculation step calculates a relative position of the imaging device and the specific object on the basis of a detection result of the specific object at the detection step. The virtual camera setting step sets a virtual camera in a virtual space on the basis of a calculation result by the calculation step. The object location step locates a selection object that corresponds to a menu item selectable by a user and is to be selected by the user, as the virtual object at a predetermined position in the virtual space that is based on a position of the specific object. The object image generation step takes an image of the virtual space with the virtual camera and generates an object image of the selection object. The display control step displays the object image on the display device such that the object image is superimposed on the real space on the screen and viewed by the user.
(14) A display control system according to an aspect of the present invention comprises a marker and a display control apparatus connected to an imaging device and a display device that allows a real space to be viewed on a screen thereof. The display control apparatus comprises taken image obtaining means, detection means, calculation means, virtual camera setting means, object location means, object image generation means, and display control means. The taken image obtaining means obtains a taken image obtained by using the imaging device. The detection means detects the marker from the taken image. The calculation means calculates a relative position of the imaging device and the marker on the basis of a detection result of the marker by the detection means. The virtual camera setting means sets a virtual camera in a virtual space on the basis of a calculation result by the calculation means. The object location means locates a selection object that corresponds to a menu item selectable by a user and is to be selected by the user, at a predetermined position in the virtual space that is based on a position of the marker. The object image generation means takes an image of the virtual space with the virtual camera and generates an object image of the selection object. The display control means displays the object image on the display device such that the object image is superimposed on the real space on the screen and viewed by the user.
The display control apparatus, the display control systems, and the display control method in the above (11) to (14) provide the same advantageous effects as those provided by the display control program in the above (1).
According to each of the aspects, a selection object that indicates a menu item selectable by the user and is to be selected by the user can be displayed as an augmented reality image.
As described above, the menu image is an augmented reality image. Thus, when a menu item is selected in the menu image by the user and a menu execution process of the selected menu item is performed, even if the menu execution process includes a process for displaying an augmented reality image (namely, a process based on the detection result of the specific object by the detection means), the user is not made to strongly feel change from the display of the menu image to a display of the subsequent augmented reality image.
Further, a menu item selectable by the user is displayed by displaying a selection object as a virtual object. Since the selectable menu item is indicated by the virtual object as described above, the user can obtain a feeling as if the selection object is present in the real world, and the menu item can be displayed without impairing a feeling of being immersed in an augmented reality world.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
(Structure of Game Apparatus)
Hereinafter, a game apparatus according to one embodiment of the present invention will be described.
Initially, an external structure of the game apparatus 10 will be described with reference to
As shown in
(Description of Lower Housing)
Initially, a structure of the lower housing 11 will be described. As shown in
As shown in
As shown in
The operation buttons 14A to 14L are each an input device for making a predetermined input. As shown in
The analog stick 15 is a device for indicating a direction, and is provided to the left of the lower LCD 12 in an upper portion of the inner side surface of the lower housing 11. As shown in
Four buttons, that is, the button 14B, the button 14C, the button 14D, and the button 14E, which are positioned so as to form a cross shape, are positioned such that a thumb of a right hand with which the lower housing 11 is held is naturally positioned on the positions of the four buttons. Further, the four buttons and the analog stick 15 sandwich the lower LCD 12, so as to be bilaterally symmetrical in position with respect to each other. Thus, depending on a game program, for example, a left-handed person can make a direction instruction input by using these four buttons.
Further, the microphone hole 18 is provided on the inner side surface of the lower housing 11. Under the microphone hole 18, a microphone (see
As shown in
Further, as shown in
Further, as shown in
A rechargeable battery (not shown) acting as a power supply for the game apparatus 10 is accommodated in the lower housing 11, and the battery can be charged through a terminal provided on a side surface (for example, the upper side surface) of the lower housing 11.
(Description of Upper Housing)
Next, a structure of the upper housing 21 will be described. As shown in
As shown in
The screen of the upper LCD 22 is provided on the inner side surface (main surface) 21B of the upper housing 21, and the screen of the upper LCD 22 is exposed at an opening of the upper housing 21. Further, as shown in
The upper LCD 22 is a display device capable of displaying a stereoscopically visible image. Further, in the present embodiment, an image for a left eye and an image for a right eye are displayed by using substantially the same display area. Specifically, the upper LCD 22 may be a display device using a method in which the image for a left eye and the image for a right eye are alternately displayed in the horizontal direction in predetermined units (for example, every other line). Alternatively, a display device using a method in which the image for a left eye and the image for a right eye are displayed alternately in a time division manner may be used. Further, in the present embodiment, the upper LCD 22 is a display device capable of displaying an image which is stereoscopically visible with naked eyes. A lenticular lens type display device or a parallax barrier type display device is used which enables the image for a left eye and the image for a right eye, which are alternately displayed in the horizontal direction, to be separately viewed by the left eye and the right eye, respectively. In the present embodiment, the upper LCD 22 of a parallax barrier type is used. The upper LCD 22 displays, by using the image for a right eye and the image for a left eye, an image (a stereoscopic image) which is stereoscopically visible with naked eyes. That is, the upper LCD 22 allows a user to view the image for a left eye with her/his left eye, and the image for a right eye with her/his right eye by utilizing a parallax barrier, so that a stereoscopic image (a stereoscopically visible image) exerting a stereoscopic effect for a user can be displayed. Further, the upper LCD 22 may disable the parallax barrier. When the parallax barrier is disabled, an image can be displayed in a planar manner; that is, a planar visible image, which is different from the stereoscopically visible image described above, is displayed in a display mode in which the same image is viewed with the left eye and the right eye. Thus, the upper LCD 22 is a display device capable of switching between a stereoscopic display mode for displaying a stereoscopically visible image and a planar display mode for displaying a planar visible image. The switching of the display mode is performed by the 3D adjustment switch 25 described below.
Two imaging sections (23a and 23b) provided on the outer side surface (the back surface reverse of the main surface on which the upper LCD 22 is provided) 21D of the upper housing 21 are generically referred to as the outer imaging section 23. The imaging directions of the outer imaging section (left) 23a and the outer imaging section (right) 23b are each the same as the outward normal direction of the outer side surface 21D. Further, these imaging sections are each designed so as to be positioned in a direction which is opposite to the normal direction of the display surface (inner side surface) of the upper LCD 22 by 180 degrees. Specifically, the imaging direction of the outer imaging section (left) 23a and the imaging direction of the outer imaging section (right) 23b are parallel to each other. The outer imaging section (left) 23a and the outer imaging section (right) 23b can be used as a stereo camera depending on a program executed by the game apparatus 10. Further, depending on a program, when any one of the two outer imaging sections (23a and 23b) is used alone, the outer imaging section 23 may be used as a non-stereo camera. Further, depending on a program, images taken by the two outer imaging sections (23a and 23b) may be combined with each other or may compensate for each other, thereby enabling imaging using an extended imaging range. In the present embodiment, the outer imaging section 23 is structured so as to include two imaging sections, that is, the outer imaging section (left) 23a and the outer imaging section (right) 23b. Each of the outer imaging section (left) 23a and the outer imaging section (right) 23b includes an imaging device, such as a CCD image sensor or a CMOS image sensor, having a common predetermined resolution, and a lens. The lens may have a zooming mechanism.
As indicated by dashed lines in
In the present embodiment, the outer imaging section (left) 23a and the outer imaging section (right) 23b are secured to the housing, and the imaging directions thereof cannot be changed.
Further, the outer imaging section (left) 23a and the outer imaging section (right) 23b are positioned to the left and to the right, respectively, of the upper LCD 22 (on the left side and the right side, respectively, of the upper housing 21) so as to be horizontally symmetrical with respect to the center of the upper LCD 22. Specifically, the outer imaging section (left) 23a and the outer imaging section (right) 23b are positioned so as to be symmetrical with respect to a line which divides the upper LCD 22 into two equal parts, that is, the left part and the right part. Further, the outer imaging section (left) 23a and the outer imaging section (right) 23b are positioned at positions which are reverse of positions above the upper edge of the screen of the upper LCD 22 and which are on the upper portion of the upper housing 21 in an opened state. Specifically, when the upper LCD 22 is projected on the outer side surface of the upper housing 21, the outer imaging section (left) 23a and the outer imaging section (right) 23b are positioned, on the outer side surface of the upper housing 21, at a position above the upper edge of the screen of the upper LCD 22 having been projected.
As described above, the two imaging sections (23a and 23b) of the outer imaging section 23 are positioned to the left and the right of the upper LCD 22 so as to be horizontally symmetrical with respect to the center of the upper LCD 22. Therefore, when a user views the upper LCD 22 from the front thereof, the imaging direction of the outer imaging section 23 can be the same as the direction of the sight line of the user. Further, the outer imaging section 23 is positioned at a position reverse of a position above the upper edge of the screen of the upper LCD 22. Therefore, the outer imaging section 23 and the upper LCD 22 do not interfere with each other inside the upper housing 21. Therefore, the upper housing 21 may have a reduced thickness as compared to a case where the outer imaging section 23 is positioned on a position reverse of a position of the screen of the upper LCD 22.
The inner imaging section 24 is positioned on the inner side surface (main surface) 21B of the upper housing 21, and acts as an imaging section which has an imaging direction which is the same direction as the inward normal direction of the inner side surface. The inner imaging section 24 includes an imaging device, such as a CCD image sensor and a CMOS image sensor, having a predetermined resolution, and a lens. The lens may have a zooming mechanism.
As shown in
As described above, the inner imaging section 24 is used for taking an image in the direction opposite to that of the outer imaging section 23. The inner imaging section 24 is positioned on the inner side surface of the upper housing 21 at a position reverse of the middle position between the left and the right imaging sections of the outer imaging section 23. Thus, when a user views the upper LCD 22 from the front thereof, the inner imaging section 24 can take an image of a face of the user from the front thereof. Further, the left and the right imaging sections of the outer imaging section 23 do not interfere with the inner imaging section 24 inside the upper housing 21, thereby enabling reduction of the thickness of the upper housing 21.
The 3D adjustment switch 25 is a slide switch, and is used for switching a display mode of the upper LCD 22 as described above. Further, the 3D adjustment switch 25 is used for adjusting the stereoscopic effect of a stereoscopically visible image (stereoscopic image) which is displayed on the upper LCD 22. As shown in
As shown in
The 3D indicator 26 indicates whether or not the upper LCD 22 is in the stereoscopic display mode. The 3D indicator 26 is implemented as an LED, and is lit up when the stereoscopic display mode of the upper LCD 22 is enabled. The 3D indicator 26 may be lit up only when the program processing for displaying a stereoscopically visible image is performed (namely, image processing in which an image for a left eye is different from an image for a right eye is performed in the case of the 3D adjustment switch being positioned between the first position and the second position) in a state where the upper LCD 22 is in the stereoscopic display mode. As shown in
Further, a speaker hole 21E is provided on the inner side surface of the upper housing 21. A sound is outputted through the speaker hole 21E from a speaker 43 described below.
(Internal Configuration of Game Apparatus 10)
Next, an internal electrical configuration of the game apparatus 10 will be described with reference to
The information processing section 31 is information processing means which includes a CPU (Central Processing Unit) 311 for executing a predetermined program, a GPU (Graphics Processing Unit) 312 for performing image processing, and the like. By executing a program stored in a memory (for example, the external memory 44 connected to the external memory I/F 33 or the internal data storage memory 35) inside the game apparatus 10, the CPU 311 of the information processing section 31 performs a process corresponding to the program (e.g., a photographing process and an image display process described below). The program executed by the CPU 311 of the information processing section 31 may be acquired from another device through communication with the other device. The information processing section 31 further includes a VRAM (Video RAM) 313. The GPU 312 of the information processing section 31 generates an image in accordance with an instruction from the CPU 311 of the information processing section 31, and renders the image in the VRAM 313. The GPU 312 of the information processing section 31 outputs the image rendered in the VRAM 313, to the upper LCD 22 and/or the lower LCD 12, and the image is displayed on the upper LCD 22 and/or the lower LCD 12.
To the information processing section 31, the main memory 32, the external memory I/F 33, the external data storage memory I/F 34, and the internal data storage memory 35 are connected. The external memory I/F 33 is an interface for detachably connecting to the external memory 44. The external data storage memory I/F 34 is an interface for detachably connecting to the external data storage memory 45.
The main memory 32 is volatile storage means used as a work area and a buffer area for (the CPU 311 of) the information processing section 31. That is, the main memory 32 temporarily stores various types of data used for the process based on the above program, and temporarily stores a program acquired from the outside (the external memory 44, another device, or the like), for example. In the present embodiment, for example, a PSRAM (Pseudo-SRAM) is used as the main memory 32.
The external memory 44 is nonvolatile storage means for storing a program executed by the information processing section 31. The external memory 44 is implemented as, for example, a read-only semiconductor memory. When the external memory 44 is connected to the external memory I/F 33, the information processing section 31 can load a program stored in the external memory 44. A predetermined process is performed by the program loaded by the information processing section 31 being executed. The external data storage memory 45 is implemented as a non-volatile readable and writable memory (for example, a NAND flash memory), and is used for storing predetermined data. For example, images taken by the outer imaging section 23 and/or images taken by another device are stored in the external data storage memory 45. When the external data storage memory 45 is connected to the external data storage memory I/F 34, the information processing section 31 loads an image stored in the external data storage memory 45, and the image can be displayed on the upper LCD 22 and/or the lower LCD 12.
The internal data storage memory 35 is implemented as a non-volatile readable and writable memory (for example, a NAND flash memory), and is used for storing predetermined data. For example, data and/or programs downloaded through the wireless communication module 36 by wireless communication are stored in the internal data storage memory 35.
The wireless communication module 36 has a function of connecting to a wireless LAN by using a method based on, for example, the IEEE 802.11b/g standard. The local communication module 37 has a function of performing wireless communication with the same type of game apparatus in a predetermined communication method (for example, infrared communication). The wireless communication module 36 and the local communication module 37 are connected to the information processing section 31. The information processing section 31 can perform data transmission to and data reception from another device via the Internet by using the wireless communication module 36, and can perform data transmission to and data reception from the same type of another game apparatus by using the local communication module 37.
The acceleration sensor 39 is connected to the information processing section 31. The acceleration sensor 39 detects magnitudes of accelerations (linear accelerations) in the directions of the straight lines along the three axial (xyz axial) directions, respectively. The acceleration sensor 39 is provided inside the lower housing 11. In the acceleration sensor 39, as shown in
The RTC 38 and the power supply circuit 40 are connected to the information processing section 31. The RTC 38 counts time, and outputs the time to the information processing section 31. The information processing section 31 calculates a current time (date) based on the time counted by the RTC 38. The power supply circuit 40 controls power from the power supply (the rechargeable battery accommodated in the lower housing 11 as described above) of the game apparatus 10, and supplies power to each component of the game apparatus 10.
The I/F circuit 41 is connected to the information processing section 31. The microphone 42 and the speaker 43 are connected to the I/F circuit 41. Specifically, the speaker 43 is connected to the I/F circuit 41 through an amplifier which is not shown. The microphone 42 detects a voice from a user, and outputs a sound signal to the I/F circuit 41. The amplifier amplifies a sound signal outputted from the I/F circuit 41, and a sound is outputted from the speaker 43. The touch panel 13 is connected to the I/F circuit 41. The I/F circuit 41 includes a sound control circuit for controlling the microphone 42 and the speaker 43 (amplifier), and a touch panel control circuit for controlling the touch panel. The sound control circuit performs A/D conversion and D/A conversion on the sound signal, and converts the sound signal to a predetermined form of sound data, for example. The touch panel control circuit generates a predetermined form of touch position data based on a signal outputted from the touch panel 13, and outputs the touch position data to the information processing section 31. The touch position data represents a coordinate of a position, on an input surface of the touch panel 13, on which an input is made. The touch panel control circuit reads a signal outputted from the touch panel 13, and generates the touch position data every predetermined time. The information processing section 31 acquires the touch position data, to recognize a position on which an input is made on the touch panel 13.
The operation button 14 includes the operation buttons 14A to 14L described above, and is connected to the information processing section 31. Operation data representing an input state of each of the operation buttons 14A to 14L is outputted from the operation button 14 to the information processing section 31, and the input state indicates whether or not each of the operation buttons 14A to 14L has been pressed. The information processing section 31 acquires the operation data from the operation button 14 to perform a process in accordance with the input on the operation button 14.
The lower LCD 12 and the upper LCD 22 are connected to the information processing section 31. The lower LCD 12 and the upper LCD 22 each display an image in accordance with an instruction from (the GPU 312 of) the information processing section 31. In the present embodiment, the information processing section 31 causes the upper LCD 22 to display a stereoscopic image (stereoscopically visible image).
Specifically, the information processing section 31 is connected to an LCD controller (not shown) of the upper LCD 22, and causes the LCD controller to set the parallax barrier to ON or OFF. When the parallax barrier is set to ON in the upper LCD 22, an image for a right eye and an image for a left eye which are stored in the VRAM 313 of the information processing section 31 are outputted to the upper LCD 22. More specifically, the LCD controller alternately repeats reading of pixel data of the image for a right eye for one line in the vertical direction, and reading of pixel data of the image for a left eye for one line in the vertical direction, thereby reading, from the VRAM 313, the image for a right eye and the image for a left eye. Thus, an image to be displayed is divided into the images for a right eye and the images for a left eye each of which is a rectangle-shaped image having one line of pixels aligned in the vertical direction, and an image, in which the rectangle-shaped image for the left eye which is obtained through the division, and the rectangle-shaped image for the right eye which is obtained through the division are alternately aligned, is displayed on the screen of the upper LCD 22. A user views the images through the parallax barrier in the upper LCD 22, so that the image for the right eye is viewed by the user's right eye, and the image for the left eye is viewed by the user's left eye. Thus, the stereoscopically visible image is displayed on the screen of the upper LCD 22.
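The column interleaving described above may be illustrated, purely as an example and not as the actual behavior of the LCD controller, by the following C++ sketch; the Image type and the assignment of even columns to the image for the left eye are assumptions made for explanation.

```cpp
// Illustrative sketch of the column interleaving described above; it is not the
// actual LCD controller. The Image type and the choice of even columns for the
// left eye are assumptions made for explanation.
#include <cstddef>
#include <cstdint>
#include <vector>

using Image = std::vector<std::uint32_t>;  // row-major pixels, width * height

Image InterleaveColumns(const Image& leftEye, const Image& rightEye,
                        int width, int height) {
    Image out(static_cast<std::size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Each vertical line (column) of pixels comes alternately from the
            // image for the left eye and the image for the right eye, so the
            // parallax barrier can route each column to the corresponding eye.
            const Image& src = (x % 2 == 0) ? leftEye : rightEye;
            out[static_cast<std::size_t>(y) * width + x] =
                src[static_cast<std::size_t>(y) * width + x];
        }
    }
    return out;
}
```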
The outer imaging section 23 and the inner imaging section 24 are connected to the information processing section 31. The outer imaging section 23 and the inner imaging section 24 each take an image in accordance with an instruction from the information processing section 31, and output data of the taken image to the information processing section 31.
The 3D adjustment switch 25 is connected to the information processing section 31. The 3D adjustment switch 25 transmits, to the information processing section 31, an electrical signal in accordance with the position of the slider 25a.
The 3D indicator 26 is connected to the information processing section 31. The information processing section 31 controls whether or not the 3D indicator 26 is to be lit up. For example, the information processing section 31 lights up the 3D indicator 26 when the upper LCD 22 is in the stereoscopic display mode. The game apparatus 10 has the internal configuration as described above.
Hereinafter, an outline of the image display process that is a feature of the present embodiment will be described with reference to
Here, the game apparatus 10 does not always display the menu image described above, and displays the above menu image when taking, with the outer imaging section 23, an image of a marker (an example of a specific object of the present invention) located in the real world. In other words, when the marker is not included in both a left real world image taken with the outer imaging section (left) 23a and a right real world image taken with the outer imaging section (right) 23b, an augmented reality image is not displayed. Hereinafter, when a left real world image and a right real world image are not distinguished from each other, they are referred to merely as “real world image”, and when they are distinguished from each other, they are referred to as “left real world image” and “right real world image”, respectively. Hereinafter, a method for displaying the selection objects by using a marker will be described with reference to
Each selection object O1 is, for example, an object having a cube shape with a predetermined thickness, and corresponds to a menu item selectable by the user as described above (e.g., an application program). Note that, actually, an icon indicating a corresponding menu item (e.g., an icon indicating an application program corresponding to each selection object O1) is displayed on each selection object O1, but is omitted in
The selection objects O1 are displayed so as to have predetermined positional relations with the marker 60. In addition, the cursor object O2 consists of, for example, a cross-shaped plate-like polygon or the like, and is displayed so as to be located at the center of an augmented reality image in a stereoscopic view. In the present embodiment, the cursor object O2 is located in the virtual space for displaying a cursor. However, instead of this configuration, a two-dimensional image of a cursor may be synthesized with an augmented reality image so as to be displayed at the center of the augmented reality image in a stereoscopic view.
Hereinafter, a method for the game apparatus 10 to display the selection objects O1 such that the selection objects O1 have the predetermined positional relations with the marker 60 will be described. First, the game apparatus 10 obtains positions and orientations of the marker 60 in a left real world image and a right real world image by performing image processing such as known pattern matching, and calculates the relative position of each outer imaging section 23 and the marker 60 in the real world on the basis of the positions and the orientations of the marker 60. Then, the game apparatus 10 sets a position and an orientation of a left virtual camera in the virtual space on the basis of the calculated relative position of the outer imaging section (left) 23a and the marker 60 and with a predetermined point in the virtual space corresponding to the marker 60 being as a reference. Similarly, the game apparatus 10 sets a position and an orientation of a right virtual camera in the virtual space on the basis of the calculated relative position of the outer imaging section (right) 23b and the marker 60. Then, the game apparatus 10 locates the four selection objects O1 at positions previously set based on the predetermined point.
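Purely for illustration, the locating of the four selection objects O1 at positions previously set with respect to the point corresponding to the marker 60 may be sketched as follows; the offset values, the 2 x 2 arrangement, and the type names are assumptions, and the actual arrangement is not limited to this example.

```cpp
// Purely illustrative: locating the four selection objects O1 at positions
// previously set with respect to the point corresponding to the marker 60
// (taken here as the origin of the marker coordinate system). The offsets and
// type names are assumptions; the actual arrangement is not limited to this.
#include <array>
#include <cstddef>

struct Vec3 { float x, y, z; };

// Preset offsets from the marker origin, in the marker coordinate system
// (a 2 x 2 arrangement is assumed only for this example).
constexpr std::array<Vec3, 4> kSelectionObjectOffsets = {{
    {-1.5f, 0.0f, -1.5f},
    { 1.5f, 0.0f, -1.5f},
    {-1.5f, 0.0f,  1.5f},
    { 1.5f, 0.0f,  1.5f},
}};

std::array<Vec3, 4> LocateSelectionObjects(const Vec3& markerOrigin) {
    std::array<Vec3, 4> positions{};
    for (std::size_t i = 0; i < positions.size(); ++i) {
        positions[i] = {markerOrigin.x + kSelectionObjectOffsets[i].x,
                        markerOrigin.y + kSelectionObjectOffsets[i].y,
                        markerOrigin.z + kSelectionObjectOffsets[i].z};
    }
    return positions;
}
```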
The above method for the game apparatus 10 to display the selection objects O1 such that the selection objects O1 have the predetermined positional relations with the marker 60 will be described more specifically with reference to
Then, the game apparatus 10 sets positions and directions of the virtual cameras in the marker coordinate system on the basis of the position and the orientation of the marker 60 in the real world image. Note that, due to a parallax between the outer imaging section (right) 23b and the outer imaging section (left) 23a, the position and the orientation of the marker 60 are different between two real world images taken with the outer imaging section (right) 23b and the outer imaging section (left) 23a. Thus, the game apparatus 10 sets the two virtual cameras, that is, the right virtual camera corresponding to the outer imaging section (right) 23b and the left virtual camera corresponding to the outer imaging section (left) 23a, and the virtual cameras are located at different positions.
Further, as described above, the game apparatus 10 locates the cursor object O2, which consists of the cross-shaped plate-like polygon, in the virtual space. The game apparatus 10 sets the located position of the cursor object O2 as follows. The game apparatus 10 sets the position of the cursor object O2 on a straight line L3 that passes through the midpoint P3 between the position P1 of the left virtual camera and the position P2 of the right virtual camera and that is parallel to the sight line L1 of the right virtual camera and the sight line L2 of the left virtual camera. Note that the cursor object O2 is located at a predetermined distance from the midpoint P3 so as to be perpendicular to the straight line L3.
An image of the virtual space arranged as described above is taken with each virtual camera, and images of the selection objects O1 and an image of the cursor object O2 are generated. These images are synthesized with a real world image, and the resultant image is displayed as a menu image. Note that images of the objects O1 and O2 taken with the right virtual camera are synthesized with a right real world image and the resultant image is displayed as an image for a right eye, and images of the objects O1 and O2 taken with the left virtual camera are synthesized with a left real world image and the resultant image is displayed as an image for a left eye.
Next, an operation of the game apparatus 10 selecting one selection object O1 on the basis of an operation of the user in the image display process will be described with reference to
As shown in.
Then, when the selection object O1 is caused to be in a selected state, the game apparatus 10 performs a process of changing a display form (e.g., shape, size, orientation, color, pattern, and the like) of the selection object O1 (hereinafter, referred to as “object form change process”). In the present embodiment, the game apparatus 10 performs a process of slightly increasing the height of the selection object O1 (by a predetermined value) and locating a shadow object O3, which consists of a plate-like polygon, below the selection object O1. By changing the display form of the selection object O1 in a selected state in this manner, the user is notified that the selection object O1 is in a selected state. Note that in the present embodiment, the shadow object O3 is located with respect to the selection object O1 in a selected state, but the shadow object O3 may instead be initially located with respect to each selection object O1 regardless of whether or not the selection object O1 is in a selected state. In such a configuration, unless a selection object O1 is in a selected state, the shadow object O3 is hidden by the selection object O1 and is not displayed, and when a selection object O1 is caused to be in a selected state, the selection object O1 is raised and the shadow object O3 is displayed.
The selection object O1 whose display form has changed is located so as to be raised from a bottom surface of the virtual space. In this case, the collision area C is set so as to extend to the bottom surface to contact the bottom surface. In
(Memory Map)
Hereinafter, programs and main data that are stored in the memory 32 when the image display process is performed will be described with reference to
The image display program 70 is a program for causing the game apparatus 10 to perform the image display process. The left real world image 71L is a real world image taken with the outer imaging section (left) 23a. The right real world image 71R is a real world image taken with the outer imaging section (right) 23b. The left view matrix 72L is used when rendering an object (the selection objects O1, the cursor object O2, or the like) that is viewed from the left virtual camera, and is a coordinate transformation matrix for transforming a coordinate represented in the marker coordinate system into a coordinate represented in a left virtual camera coordinate system. The right view matrix 72R is used when rendering an object (the selection objects O1, the cursor object O2, or the like) that is viewed from the right virtual camera, and is a coordinate transformation matrix for transforming a coordinate represented in the marker coordinate system into a coordinate represented in a right virtual camera coordinate system.
The selection object information 73 is information on a selection object O1, and includes model information representing the shape and pattern of the selection object O1, information indicating a position in the marker coordinate system, and the like. The selection object information 73 is stored for each of the selection objects O1 (O1a, O1b, . . . , O1n). The cursor object information 74 is information on the cursor object O2, and includes model information representing the shape and color of the cursor object O2, information indicating the current position and the distance from the midpoint P3, and the like. The selection information 75 is information for identifying a selection object O1 in a selected state, among the four selection objects O1. The collision information 76 is information on each collision area C, and indicates a set range of the collision area C based on the position of the selection object O1 (e.g., the position of its representative point). The collision information 76 is used for generating the collision areas C1 to C4. The menu item information 77 is information indicating a menu item corresponding to each selection object O1. The shadow object information 78 is information on a shadow object O3, and includes model information representing the shape and color of the shadow object O3 and information indicating a position based on the position of the selection object O1 (e.g., the position of its representative point).
The left real world image 71L, the right real world image 71R, the left view matrix 72L, the right view matrix 72R, and the selection information 75 are data that are generated by execution of the image display program and temporarily stored in the memory 32. The selection object information 73, the cursor object information 74, the collision information 76, the menu item information 77, and the shadow object information 78 are data that are previously stored in the internal data storage memory 35, the external memory 44, the external data storage memory 45, or the like, and are read out by execution of the image display program and stored in the memory 32. Although not shown, information on a virtual object, selection information, and collision information that are used in the menu execution process are stored in the same format as the selection object information 73, the selection information 75, and the collision information 76, respectively. These pieces of information are also previously stored in the internal data storage memory 35, the external memory 44, the external data storage memory 45, or the like, and are read out by execution of the image display program and stored in the memory 32.
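For explanatory purposes only, the data listed in the memory map above might be organized as in the following C++ sketch; all field names and types are assumptions and do not reflect the actual data layout of the embodiment.

```cpp
// For explanation only: one conceivable organization of the data listed in the
// memory map above. All field names and types are assumptions.
#include <array>
#include <optional>
#include <string>

struct Vec3 { float x, y, z; };

struct SelectionObjectInfo {      // selection object information 73
    std::string modelName;        // model (shape and pattern) of the selection object O1
    Vec3 positionInMarkerSpace;   // position in the marker coordinate system
    int menuItemId;               // menu item information 77: item this object corresponds to
};

struct CursorObjectInfo {         // cursor object information 74
    Vec3 currentPosition;         // current position of the cursor object O2
    float distanceFromMidpoint;   // predetermined distance from the midpoint P3 along L3
};

struct CollisionInfo {            // collision information 76
    Vec3 halfExtents;             // set range of a collision area C around a selection object
};

struct MenuState {
    std::array<SelectionObjectInfo, 4> selectionObjects;  // O1a to O1d
    CursorObjectInfo cursor;
    std::optional<int> selectedIndex;  // selection information 75; empty when nothing is selected
    CollisionInfo collision;
};
```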
(Image Display Process)
Hereinafter, the image display process performed by the CPU 311 will be described in detail with reference to
First, the CPU 311 obtains a left real world image 71L and a right real world image 71R from the memory 32 (S10). Then, the CPU 311 performs a marker recognition process on the basis of the obtained left real world image 71L and right real world image 71R (S11).
As described above, in the upper housing 21, the outer imaging section (left) 23a and the outer imaging section (right) 23b are spaced apart from each other at a certain interval (e.g., 3.5 cm). Thus, when images of the marker 60 are simultaneously taken with the outer imaging section (left) 23a and the outer imaging section (right) 23b, the position and the orientation of the marker 60 in a left real world image taken with the outer imaging section (left) 23a are different from the position and the orientation of the marker 60 in a right real world image taken with the outer imaging section (right) 23b, due to the parallax. In the present embodiment, the CPU 311 performs the marker recognition process on both the left real world image and the right real world image.
For example, when performing the marker recognition process on the left real world image, the CPU 311 determines whether or not the marker 60 is included in the left real world image, by using pattern matching or the like. When the marker 60 is included in the left real world image, the CPU 311 calculates a left view matrix 72L on the basis of the position and the orientation of the marker 60 in the left real world image. The left view matrix 72L is a matrix in which a position and orientation of the left virtual camera that are calculated on the basis of the position and the orientation of the marker 60 in the left real world image are reflected. More precisely, the left view matrix 72L is a coordinate transformation matrix for transforming a coordinate represented in the marker coordinate system in the virtual space as shown in
Further, for example, when performing the marker recognition process on the right real world image, the CPU 311 determines whether or not the marker 60 is included in the right real world image, by using pattern matching or the like. When the marker 60 is included in the right real world image, the CPU 311 calculates a right view matrix 72R on the basis of the position and the orientation of the marker 60 in the right real world image. The right view matrix 72R is a matrix in which a position and orientation of the right virtual camera that are calculated on the basis of the position and the orientation of the marker 60 in the right real world image are reflected. More precisely, the right view matrix 72R is a coordinate transformation matrix for transforming a coordinate represented in the marker coordinate system in the virtual space as shown in
In calculating the view matrixes 72L and 72R, the CPU 311 calculates the relative position of the marker 60 and each outer imaging section 23. Then, as described above with reference to
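By way of illustration only, the relationship between the marker pose and a view matrix can be sketched as follows in C++. This is a minimal sketch, assuming that the marker recognition process yields the rotation and translation of the marker in the coordinate system of one outer imaging section; the names Mat44, MarkerPose, and makeViewMatrix are illustrative and do not appear in the embodiment.

```cpp
#include <array>

// Minimal 4x4 matrix, row-major. Illustrative only.
using Mat44 = std::array<std::array<float, 4>, 4>;

// Pose of the marker as seen from one outer imaging section:
// rotation R (3x3) and translation t (3x1) in camera coordinates.
struct MarkerPose {
    float R[3][3];
    float t[3];
};

// Build a view matrix that transforms a point expressed in the marker
// coordinate system into the virtual camera coordinate system, so that
// the virtual camera mirrors the corresponding real outer imaging section.
Mat44 makeViewMatrix(const MarkerPose& pose) {
    Mat44 v{};
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c) {
            v[r][c] = pose.R[r][c];   // rotation part
        }
        v[r][3] = pose.t[r];          // translation part
    }
    v[3][3] = 1.0f;                   // homogeneous row
    return v;
}
```

The same construction is applied to the left real world image to obtain the left view matrix 72L and to the right real world image to obtain the right view matrix 72R.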
Next, the CPU 311 performs a process of calculating a position of the cursor object O2 and locating the cursor object O2 in the virtual space (S12). Specifically, the CPU 311 calculates a straight line L3 as shown in
Then, the CPU 311 performs a collision determination process (S13). Specifically, in the collision determination process, the CPU 311 reads out the collision information 76 from the memory 32, and calculates a collision area C for each selection object O1 on the basis of the collision area indicated by the collision information 76. The collision area C is also represented in the marker coordinate system. Here, the collision area C is calculated on the basis of a position of the selection object O1 that is set in processing at the last frame, and is set so as to surround five sides of the selection object O1 except its bottom, as described above with reference to
Subsequently, the CPU 311 determines a selection object O1 to be in a selected state (S14). Specifically, when determining that any of the collision areas C collides with the straight line L3 calculated at step S12, the CPU 311 determines a selection object O1 corresponding to the colliding collision area C as a selection object O1 to be in a selected state. In other words, the CPU 311 stores, in the memory 32, selection information 75 indicating the colliding selection object O1. When selection information 75 has been already stored, the CPU 311 updates the selection information 75. On the other hand, when determining that no collision area C intersects the straight line L3 calculated at step S12, the CPU 311 deletes the selection information 75 stored in the memory 32 (or stores a NULL value), in order to provide a state where no selection object O1 is selected.
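A minimal sketch of steps S13 and S14 follows, assuming that the straight line L3 is treated as a ray and that each collision area C is approximated by an axis-aligned box in the marker coordinate system; the slab intersection test and all names (Ray, CollisionArea, determineSelection) are illustrative choices, not the embodiment's actual implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <optional>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Collision area C of one selection object O1, approximated here as an
// axis-aligned box in the marker coordinate system.
struct CollisionArea {
    Vec3 min;   // lower corner
    Vec3 max;   // upper corner
};

// Straight line L3, treated as a ray: origin plus direction.
struct Ray {
    Vec3 origin;
    Vec3 dir;   // need not be normalized
};

// Slab test: does the ray pass through the box?
bool intersects(const Ray& ray, const CollisionArea& box) {
    float tMin = 0.0f, tMax = 1e30f;
    const float o[3]  = { ray.origin.x, ray.origin.y, ray.origin.z };
    const float d[3]  = { ray.dir.x,    ray.dir.y,    ray.dir.z };
    const float lo[3] = { box.min.x, box.min.y, box.min.z };
    const float hi[3] = { box.max.x, box.max.y, box.max.z };
    for (int i = 0; i < 3; ++i) {
        if (std::fabs(d[i]) < 1e-6f) {
            if (o[i] < lo[i] || o[i] > hi[i]) return false;  // parallel to this slab and outside it
        } else {
            float t1 = (lo[i] - o[i]) / d[i];
            float t2 = (hi[i] - o[i]) / d[i];
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false;
        }
    }
    return true;
}

// Steps S13/S14 in outline: test L3 against every collision area and return
// the index of the selection object to be in a selected state, or
// std::nullopt when nothing is selected (the selection information is deleted).
std::optional<std::size_t> determineSelection(const Ray& l3,
                                              const std::vector<CollisionArea>& areas) {
    for (std::size_t i = 0; i < areas.size(); ++i) {
        if (intersects(l3, areas[i])) return i;
    }
    return std::nullopt;
}
```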
Subsequently, the CPU 311 performs the object form change process described above (S15). Specifically, the CPU 311 changes the position of the selection object O1 in a selected state (namely, the selection object O1 indicated by the selection information 75) (updates the position indicated by the selection object information 73) such that the selection object O1 is raised to a position higher than an initial position by a predetermined height. In addition, on the basis of the shadow object information 78, the CPU 311 locates the shadow object O3 at a position based on the position of the selection object O1 in a selected state (below the position of the selection object O1).
With respect to the selection objects O1 that are not in a selected state, the CPU 311 sets their positions to the initial positions, and does not locate the shadow objects O3.
Here, in the present embodiment, the CPU 311 changes the display form of the selection object O1 by changing the height of the selection object O1. However, the CPU 311 may change the display form of the selection object O1 by changing the orientation of the selection object O1 (e.g., displaying an animation indicating that the selection object O1 stands up), shaking the selection object O1, or the like. When the change of the display form of the selection object O1 is set to have a natural content similar to change of a display form of a real object in the real world as described above, a feeling of the user being immersed in an augmented reality world can be enhanced.
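The object form change process at step S15 can be sketched as follows, assuming the raising-by-height form described above; the structures and the function name changeObjectForms are hypothetical.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

struct SelectionObject {
    Vec3 position;        // current position in the marker coordinate system
    Vec3 initialPosition; // position when not selected
};

struct ShadowObject {
    Vec3 position;
    bool placed;          // whether the shadow is located in the virtual space
};

// Step S15 in outline: only the selected object is raised by a predetermined
// height, with its shadow object placed at the initial (ground) position;
// every other object is returned to its initial position with no shadow.
void changeObjectForms(std::vector<SelectionObject>& objects,
                       std::vector<ShadowObject>& shadows,
                       std::optional<std::size_t> selectedIndex,
                       float raiseHeight) {
    for (std::size_t i = 0; i < objects.size(); ++i) {
        objects[i].position = objects[i].initialPosition;
        shadows[i].placed = false;
        if (selectedIndex && *selectedIndex == i) {
            objects[i].position.y += raiseHeight;              // lift above the initial position
            shadows[i].position = objects[i].initialPosition;  // shadow stays below the raised object
            shadows[i].placed = true;
        }
    }
}
```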
Next, the CPU 311 renders the left real world image 71L and the right real world image 71R in corresponding areas, respectively, of the VRAM 313 by using the GPU 312 (S16). Subsequently, the CPU 311 renders the selection objects O1 and the cursor object O2 such that the selection objects O1 and the cursor object O2 are superimposed on the real world images 71L and 71R in the VRAM 313, by using the GPU 312 (S17). Specifically, the CPU 311 performs viewing transformation on coordinates of the selection objects O1 and the cursor object O2 in the marker coordinate system into coordinates in the left virtual camera coordinate system, by using the left view matrix 72L calculated at step S11. Then, the CPU 311 performs a predetermined rendering process on the basis of the coordinates obtained by the transformation, and renders the selection objects O1 and the cursor object O2 on the left real world image 71L by using the GPU 312, to generate a superimposed image (an image for a left eye). Similarly, the CPU 311 performs viewing transformation on the coordinates of the objects O1 and O2 by using the right view matrix 72R. Then, the CPU 311 performs a predetermined rendering process on the basis of the coordinates obtained by the transformation, and renders the selection objects O1 and the cursor object O2 on the right real world image 71R by using the GPU 312, to generate a superimposed image (an image for a right eye). Thus, for example, an image as shown in
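A rough sketch of the per-eye viewing transformation at steps S16 and S17 is given below; it reuses the row-major view matrix layout assumed in the earlier sketch, and the actual projection and rasterization are left as a placeholder because the GPU interface is not described here.

```cpp
#include <array>
#include <vector>

using Mat44 = std::array<std::array<float, 4>, 4>;
struct Vec3 { float x, y, z; };

// Apply a view matrix (marker coordinates -> virtual camera coordinates)
// to one vertex, assuming the row-major [R|t] layout of the earlier sketch.
Vec3 toCameraSpace(const Mat44& view, const Vec3& p) {
    return {
        view[0][0] * p.x + view[0][1] * p.y + view[0][2] * p.z + view[0][3],
        view[1][0] * p.x + view[1][1] * p.y + view[1][2] * p.z + view[1][3],
        view[2][0] * p.x + view[2][1] * p.y + view[2][2] * p.z + view[2][3],
    };
}

// Steps S16/S17 in outline: the same object vertices are transformed with the
// view matrix of each eye and drawn over the real world image already rendered
// for that eye, yielding the left-eye and right-eye superimposed images.
void renderBothEyes(const std::vector<Vec3>& objectVertices,
                    const Mat44& leftView, const Mat44& rightView) {
    const Mat44* views[2] = { &leftView, &rightView };
    for (int eye = 0; eye < 2; ++eye) {
        for (const Vec3& v : objectVertices) {
            Vec3 inCamera = toCameraSpace(*views[eye], v);
            (void)inCamera;   // placeholder: project and rasterize onto this eye's real world image
        }
    }
}
```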
Next, the CPU 311 calculates the distance from the outer imaging section 23 (specifically, the central point of a line connecting the outer imaging section (left) 23a to the outer imaging section (right) 23b) to the marker 60 (S18). This distance can be calculated on the basis of the position and the orientation of the marker 60 included in the left real world image 71L and the position and the orientation of the marker 60 included in the right real world image 71R. This distance may be calculated on the basis of the size of the marker 60 in the left real world image 71L and/or the right real world image 71R. Still alternatively, instead of the distance between the outer imaging section 23 and the marker 60 in the real world, the distance between the virtual camera in the virtual space and the origin of the marker coordinate system may be calculated.
Then, the CPU 311 determines whether or not the distance calculated at step S18 is equal to or less than a predetermined value (S19). The smaller the distance between the outer imaging section 23 and the marker 60 is, the larger the marker 60 and the selection objects O1 displayed on the upper LCD 22 are. When the distance between the outer imaging section 23 and the marker 60 is equal to or less than a certain value, if the user tilts the game apparatus 10 (the outer imaging section 23) in order to select a desired selection object O1, a part of the marker 60 is moved out of the imaging range of the outer imaging section 23 before the selection object O1 is caused to be in a selected state, and an augmented reality image cannot be generated. Therefore, at step S19, it is determined whether or not the distance between the outer imaging section 23 and the marker 60 is too small. The distance at which the problem described above arises depends on the number, the sizes, and the positions of the selection objects O1 located in the virtual space. Thus, when the number, the sizes, and the positions of the selection objects O1 are variable, the predetermined value used at step S19 may be made variable in accordance with them. With this setting, the warning at step S20 described below is displayed only when necessary for the current number, sizes, and positions of the selection objects O1.
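As one possible illustration of steps S18 and S19, the distance can be triangulated from the horizontal offset (parallax) of the marker between the two real world images, and the threshold can be made to grow with the spread of the selection objects. The constants and names below are assumptions for the sketch, not values used by the embodiment.

```cpp
#include <cmath>

// Rough stereo estimate of the distance from the outer imaging section to the
// marker. baselineCm is the spacing between the two imaging sections (about
// 3.5 cm in the embodiment), focalPx is the focal length in pixels, and the
// two x coordinates are the marker centers in the left and right real world
// images. All names are illustrative.
float markerDistanceCm(float baselineCm, float focalPx,
                       float markerXLeftPx, float markerXRightPx) {
    float disparityPx = std::fabs(markerXLeftPx - markerXRightPx);
    if (disparityPx < 1.0f) disparityPx = 1.0f;   // avoid division by zero
    return baselineCm * focalPx / disparityPx;
}

// Step S19 in outline: the threshold below which the warning is shown can be
// made to depend on how far the selection objects extend from the marker, so
// that tilting the apparatus toward an outer object does not lose the marker.
bool tooCloseToMarker(float distanceCm, float objectSpreadCm) {
    float threshold = 10.0f + 2.0f * objectSpreadCm;  // assumed, tunable relation
    return distanceCm <= threshold;
}
```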
When determining that the distance calculated at step S18 is equal to or less than the predetermined value (YES at S19), the CPU 311 instructs the GPU 312 to render a warning message on each superimposed image (the image for a left eye and the image for a right eye) in the VRAM 313 (S20). The warning message is, for example, a message for prompting the user to move away from the marker 60. Then, the CPU 311 advances the processing to step S22.
On the other hand, when determining that the distance calculated at step S18 is not equal to or less than the predetermined value (NO at S19), the CPU 311 instructs the GPU 312 to render a message indicating a method of selecting a selection object O1 (e.g., a message, “please locate a cursor at a desired selection object and press a predetermined button”) in an edge portion of each superimposed image (the image for a left eye and the image for a right eye) in the VRAM 313. Then, the CPU 311 advances the processing to step S22.
Next, the CPU 311 determines whether or not there is a selection object O1 in a selected state (S22). This determination is performed by referring to the selection information 75. When determining that there is no selection object O1 in a selected state (NO at S22), the CPU 311 returns the processing to step S10. On the other hand, when determining that there is a selection object O1 in a selected state (YES at S22), the CPU 311 determines whether or not a menu selection fixing instruction has been received from the user (S23). The menu selection fixing instruction is inputted, for example, by any of the operation buttons 14 being operated. When determining that the menu selection fixing instruction has not been received (NO at S23), the CPU 311 returns the processing to step S10. In this manner, steps S10 to S22, together with step S23 when it is determined as YES at step S22, are repeatedly performed in predetermined rendering cycles (e.g., 1/60 sec) until the menu selection fixing instruction is received.
On the other hand, when determining that the menu selection fixing instruction has been received (YES at S23), the CPU 311 determines that the selection of the selection object O1 is fixed, and performs a menu execution process corresponding to the selection object O1 (e.g., executes an application program corresponding to the selection object O1) on the basis of the menu item information 77 (S24). The menu execution process is repeatedly performed in predetermined rendering cycles (e.g., 1/60 sec).
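The overall control flow of steps S10 to S24 may be summarized by the following sketch, in which each step is represented by a hypothetical helper function (declared but not defined here) and the function is assumed to be called once per rendering cycle (e.g., every 1/60 sec).

```cpp
#include <cstddef>
#include <optional>

// Hypothetical per-step helpers; each corresponds to one step of the flow
// described above and is assumed to be implemented elsewhere.
bool acquireRealWorldImages();                       // S10
void recognizeMarkerAndUpdateViewMatrices();         // S11
void placeCursorAndSelectionObjects();               // S12 to S15
void renderSuperimposedImages();                     // S16 and S17
bool markerTooClose();                               // S18 and S19
void drawWarning();                                  // S20
void drawSelectionHint();                            // message shown on the NO branch at S19
std::optional<std::size_t> selectedObject();         // S22
bool fixingInstructionReceived();                    // S23
void runMenuExecutionProcess(std::size_t item);      // S24

// Called once per rendering cycle. Returns true while the menu is still being
// displayed and false once a menu item has been fixed.
bool imageDisplayFrame() {
    if (!acquireRealWorldImages()) return true;
    recognizeMarkerAndUpdateViewMatrices();
    placeCursorAndSelectionObjects();
    renderSuperimposedImages();
    if (markerTooClose()) drawWarning(); else drawSelectionHint();
    if (auto sel = selectedObject(); sel && fixingInstructionReceived()) {
        runMenuExecutionProcess(*sel);   // selection fixed: start the menu execution process
        return false;
    }
    return true;                         // otherwise repeat from step S10 next cycle
}
```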
Hereinafter, the menu execution process will be described with reference to
In the predetermined game process, the process for displaying an augmented reality image in which virtual objects are superimposed on the left real world image 71L and the right real world image 71R, and the process of selecting a virtual object, may be performed in the same manner as at steps S10 to S17. For example, the predetermined game process is a process for a shooting game as described below. Specifically, in the process for the shooting game, enemy objects as virtual objects are located at positions based on the marker 60 in the virtual space. Then, when the user tilts the game apparatus 10 so that the straight line L3 in the virtual space collides with any of the collision areas C for the displayed enemy objects, the game apparatus 10 selects the colliding enemy object as a shooting target. When the CPU 311 performs such a game process, displaying the selection objects O1 and causing the user to select a selection object O1 prior to the game process allows the user to practice the operation of selecting a virtual object in the predetermined game process. In other words, the menu image can serve as a tutorial image.
In the present embodiment, the game process is performed at step S241. However, any process other than the game process may be performed at step S241, as long as it is a process in which an augmented reality image is displayed.
Next, the CPU 311 determines whether or not the game has been cleared (S242). When determining that the game has been cleared (YES at S242), the CPU 311 ends the menu execution process and returns the processing to step S10 in
The image display process described above is a process in the case where the stereoscopic display mode is selected. However, in the present embodiment, the selection objects O1 can be displayed even in a planar display mode. Specifically, in the planar display mode, only one of the outer imaging section (left) 23a and the outer imaging section (right) 23b is activated; either may be used, but in the present embodiment, only the outer imaging section (left) 23a is activated. In the image display process in the planar display mode, only the left view matrix 72L is calculated (the right view matrix 72R is not calculated), and the position of each selection object O1 in the marker coordinate system is transformed into a position in the virtual camera coordinate system by using the left view matrix 72L. The position of the cursor object O2 is set on the sight line of the left virtual camera. Then, the collision determination is performed between the collision areas C and the sight line of the left virtual camera instead of the straight line L3. In all other respects, the image display process in the planar display mode is the same as the image display process in the stereoscopic display mode, and thus the description thereof is omitted.
As described above, in the present embodiment, the game apparatus 10 can display a menu image indicating menu items selectable by the user, by displaying an augmented reality image of the selection objects O1. Thus, even when a menu item is selected and a corresponding menu execution process is performed with an augmented reality image displayed, the display can be changed from the menu image to the augmented reality image in the menu execution process without making the user strongly feel the change.
Further, the user can select a selection object O1 by a simple operation of moving the game apparatus 10 (the outer imaging section 23), and even in a menu screen in which the selection objects O1 are displayed, selection of a menu item can be performed with improved operability and enhanced amusement.
Hereinafter, modifications of the embodiment described above will be described.
(1) In the embodiment described above, the position and the orientation of the right virtual camera are set on the basis of the position and the orientation of the left virtual camera that are calculated from the recognition result of the marker in the left real world image. In another embodiment, the position and the orientation of the left virtual camera and the position and the orientation of the right virtual camera may be set by considering either or both of: the position and the orientation of the left virtual camera that are calculated from the recognition result of the marker in the left real world image; and the position and the orientation of the right virtual camera that are calculated from the recognition result of the marker in the right real world image.
(2) In the embodiment described above, the selection objects O1 are located around the origin of the marker coordinate system. In another embodiment, the selection objects O1 may not be located at positions around the origin of the marker coordinate system. However, the case where the selection objects O1 are located at positions around the origin of the marker coordinate system is preferred, since the marker 60 is unlikely to be out of the imaging range of the outer imaging section 23 even when the game apparatus 10 is tilted in order to cause a selection object O1 to be in a selected state.
(3) In the embodiment described above, a plurality of selection objects O1 is located in the virtual space. In another embodiment, only one selection object O1 may be located in the virtual space. As a matter of course, the shape of each selection object O1 is not limited to a cube shape. The shapes of the collision areas C and the cursor object O2 are also not limited to the shapes in the embodiment described above.
(4) In the embodiment described above, the relative position of each virtual camera and each selection object O1 is set on the basis of the positions and the orientations of the marker 60 in the real world images 71L and 71R, but may be set on the basis of the position and the orientation of another specific object other than the marker 60. The other specific object is, for example, a person's face, a hand, a bill, or the like, and may be any object as long as it is identifiable by pattern matching or the like.
(5) In the embodiment described above, the relative position and orientation of the virtual camera with respect to the selection object O1 are changed, in accordance with change of the orientation and the position of the outer imaging section 23, by using a specific object such as the marker 60, but may be changed by another method. For example, the following method as disclosed in Japanese Patent Application No. 2010-127092 may be used. Specifically, at the start of the image display process, the position of the virtual camera, the position of each selection object O1 in the virtual space, and the imaging direction of the virtual camera are set to previously-set defaults. Then, a movement amount (a change amount of the orientation) of the outer imaging section 23 from the start of the image display process is obtained by calculating, for each frame, the difference between the real world image at the last frame and the real world image at the current frame, and the imaging direction of the virtual camera is changed from the default direction in accordance with the movement amount. By so doing, the orientation of the virtual camera is changed in accordance with the change of the orientation of the outer imaging section 23 without using the marker 60, and the direction of the straight line L3 is changed accordingly, whereby it is possible to cause the straight line L3 to collide with a selection object O1.
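A rough sketch of this markerless alternative is given below, assuming a pinhole camera model and a hypothetical estimateFrameShift function that returns the apparent shift of the current real world image relative to the previous frame; how that shift is estimated (for example, by block matching) is outside the sketch.

```cpp
#include <cmath>

// Hypothetical estimator: horizontal/vertical shift, in pixels, of the current
// real world image relative to the previous frame. Its implementation is not
// part of this sketch.
void estimateFrameShift(float* dxPx, float* dyPx);

struct CameraDirection {
    float yawRad;    // accumulated rotation about the vertical axis
    float pitchRad;  // accumulated rotation about the horizontal axis
};

// Modification (5) in outline: starting from a default imaging direction, the
// virtual camera direction is updated every frame from the apparent movement
// of the real world image, with no marker. focalPx is the focal length in
// pixels of the outer imaging section (assumed pinhole model).
void updateVirtualCameraDirection(CameraDirection& dir, float focalPx) {
    float dx = 0.0f, dy = 0.0f;
    estimateFrameShift(&dx, &dy);
    dir.yawRad   += std::atan2(dx, focalPx);   // horizontal image shift -> yaw change
    dir.pitchRad += std::atan2(dy, focalPx);   // vertical image shift -> pitch change
}
```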
(6) In the embodiment described above, the user selects a desired selection object O1 by moving the game apparatus 10 (namely, the outer imaging section 23) such that the straight line L3 intersects the desired selection object O1, but may select a selection object O1 by another method. For example, a movement of the game apparatus 10 may be detected by using an acceleration sensor, an angular velocity sensor, or the like, and a desired selection object O1 may be selected in accordance with the movement. Alternatively, for example, a desired selection object O1 may be selected by using a pointing device such as a touch panel.
(7) In the embodiment described above, the outer imaging section 23 is previously mounted to the game apparatus 10. In another embodiment, an external camera detachable from the game apparatus 10 may be used.
(8) In the embodiment described above, the upper LCD 22 is previously mounted to the game apparatus 10. In another embodiment, an external stereoscopic display detachable from the game apparatus 10 may be used.
(9) In the embodiment described above, the upper LCD 22 is a stereoscopic display device using a parallax barrier method. In another embodiment, the upper LCD 22 may be a stereoscopic display device using any other method such as a lenticular lens method. For example, in the case of using a stereoscopic display device using a lenticular lens method, the CPU 311 or another processor may synthesize an image for a left eye and an image for a right eye, and the synthesized image may be supplied to the stereoscopic display device using a lenticular lens method.
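As an illustration of such synthesis, lenticular (and parallax barrier) panels commonly interleave pixel columns of the two eye images; the exact pattern depends on the panel, so the sketch below merely alternates columns and uses illustrative types.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One RGBA8 image stored row-major; illustrative type only.
struct Image {
    int width = 0, height = 0;
    std::vector<std::uint32_t> pixels;   // width * height entries
};

// Column interleaving: even columns come from the left-eye image, odd columns
// from the right-eye image. Both inputs are assumed to have the same size.
Image synthesizeForLenticular(const Image& leftEye, const Image& rightEye) {
    Image out;
    out.width = leftEye.width;
    out.height = leftEye.height;
    out.pixels.resize(static_cast<std::size_t>(out.width) * out.height);
    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            const Image& src = (x % 2 == 0) ? leftEye : rightEye;
            out.pixels[static_cast<std::size_t>(y) * out.width + x] =
                src.pixels[static_cast<std::size_t>(y) * src.width + x];
        }
    }
    return out;
}
```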
(10) In the embodiment described above, it is possible to switch between the stereoscopic display mode and the planar display mode. However, display may be performed only in either one of the modes.
(11) In the embodiment described above, virtual objects are synthesized with a real world image and displayed by using the game apparatus 10. In another embodiment, virtual objects may be synthesized with a real world image and displayed by using any information processing apparatus or information processing system (e.g., a PDA (Personal Digital Assistant), a mobile phone, a personal computer, or a camera).
(12) In the embodiment described above, the image display process is performed by using only one information processing apparatus (the game apparatus 10). In another embodiment, a plurality of information processing apparatuses, included in an image display system, which are communicable with each other may share the performing of the image display process.
(13) In the embodiment described above, a video see-through technique has been described in which a camera image taken with the outer imaging section 23 and images of virtual objects (the selection objects O1 and the like) are superimposed on each other and displayed on the upper LCD 22. However, the present invention is not limited thereto. For example, an optical see-through technique may be implemented. In this case, at least a head mounted display equipped with a camera is used, and the user can view the real space through a display part corresponding to a lens part of eye glasses. The display part is formed from a material that allows the user to view the real space therethrough. In addition, the display part includes a liquid crystal display device or the like, and is configured to display an image of a virtual object generated by a computer, on the liquid crystal display device or the like and reflect light from the liquid crystal display device by a half mirror or the like such that the light is guided to the user's retina. Thus, the user can view an image in which the image of the virtual object is superimposed on the real space. The camera included in the head mounted display is used for detecting a marker located in the real space, and an image of a virtual object is generated on the basis of the detection result. Further, as another optical see-through technique, there is a technique in which a half mirror is not used and a transmissive liquid crystal display device is laminated on the display part. The present invention may use this technique. In this case, when an image of a virtual object is displayed on the transmissive liquid crystal display device, the image of the virtual object displayed on the transmissive liquid crystal display device is superimposed on the real space viewed through the display part, and the image of the virtual object and the real space are viewed by the user.
While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It will be understood that numerous other modifications and variations can be devised without departing from the scope of the invention.