The present invention relates to a stereoscopic image display apparatus, a method, and a computer program product for generating a stereoscopic image that is movable in conjunction with a real object.
Various methods are known that can be used by stereoscopic image display apparatuses that are operable to display moving pictures (i.e., so-called three-dimensional display devices). In recent years, there has been an increasing demand for flat-panel type display apparatuses that do not require special eyeglasses or the like. A method that can be relatively easily realized is to place a light beam controlling element in front of a display panel in which the positions of the pixels are fixed, such as a direct-view-type liquid crystal display device, a projection-type liquid crystal display device, or a plasma display device. The light beam controlling element is configured to control light beams from the display panel so that the light beams are directed toward the viewer.
Such a light beam controlling element is generally known as a parallax barrier and controls the light beams so that, in a given position on the light beam controlling element, mutually different images are viewed depending on the angle. More specifically, when only a left-right parallax (i.e., a horizontal parallax) is to be applied, slits or a lenticular sheet (i.e., a cylindrical lens array) is used as the light beam controlling element. When an up-down parallax (i.e., a vertical parallax) is also to be applied, a pin-hole array or a lens array is used as the light beam controlling element.
Methods in which a parallax barrier is used can be further classified into a two-view method, a multi-view method, a super multi-view method (i.e., a multi-view method with a super multi-view condition), and an integral imaging method (hereinafter, “II method”). The basic principle used in these methods is substantially the same as the one that was invented about 100 years ago and has been used in stereoscopic photography.
Because a distance of sight is generally finite, a display image is generated so that a perspective projection image can be actually viewed at the distance of sight, regardless of whether the II method is used or the multi-view method is used. In the II method in which only a horizontal parallax is used and no vertical parallax is used, in a case where the pitch of the parallax barrier in the horizontal direction is an integer multiple of the pitch of the pixels in the horizontal direction, there is a set of parallel light beams. Thus, an image in which the vertical direction corresponds to a perspective projection for a certain distance of sight and the horizontal direction corresponds to a parallel projection is divided in units of rows of pixels. Then, the divided image is combined into a parallax combined image that is in an image format displayable on a display screen. As a result, a stereoscopic image that is properly projected is obtained (see, for example, Patent Documents 1 and 2). In particular, the II method is suitable for interactive purposes such as to perform an operation to directly point to the reproduced image (i.e., the three-dimensional image), because the light beams from a real object are reproduced.
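For illustration, the combination of projections just described can be written as a small sketch. It assumes a display plane at z = 0, a viewer centered at the distance of sight, and one parallel projection per light beam direction; this is a simplification for explanatory purposes, not the rendering procedure of the cited documents.

```python
def ii_projection(point, sight_distance, beam_slope):
    """Project a 3-D point onto the display plane z = 0 for
    horizontal-parallax-only II rendering: the horizontal direction
    uses a parallel projection along a light beam of slope
    beam_slope (dx/dz), and the vertical direction uses a
    perspective projection for a viewer at z = sight_distance.

    point: (x, y, z), with z measured from the display plane
    (assumes z < sight_distance).
    """
    x, y, z = point
    xp = x - z * beam_slope                         # parallel projection
    yp = y * sight_distance / (sight_distance - z)  # perspective projection
    return xp, yp
```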
With three-dimensional display devices that use such a light beam reproduction method to display a stereoscopic image by reproducing light beams, it is possible to reproduce a stereoscopic image of high quality by increasing the amount of reproduced light beam data: for example, by increasing the number of viewpoints when the multi-view method is used, or by increasing the number of light beams emitted in mutually different directions, with the display screen as a reference, when the II method is used.
When the II method is used, however, although the user is able to directly point to a three-dimensional image (i.e., an optical real image) that is reproduced in front of a display panel (i.e., on the optical real-image side), the user is not able to directly point to a three-dimensional image (i.e., an optical virtual image) that is reproduced behind the display panel (i.e., on the optical virtual-image side), because the display panel is physically in the way between the three-dimensional image and the user.
In view of the problem described above, it is an object of the present invention to provide a stereoscopic image display apparatus, a stereoscopic image displaying method, and a stereoscopic image displaying computer program with which it is possible to improve operability for three-dimensional images in the stereoscopic image display apparatus that uses the integral imaging method or the light beam reproduction method.
To solve the problem described above and achieve the object, the present invention provides a stereoscopic image display apparatus that displays a three-dimensional image by using an integral imaging method or a light beam reproduction method, and includes a position detecting unit that detects a position and an orientation direction of a handheld device positioned inside or near a display space provided over a three-dimensional display screen; a calculation processing unit that performs a calculation process for displaying the three-dimensional image in a position that is contiguous with or close to the handheld device, based on the position and the orientation direction of the handheld device; and a display controlling unit that causes the three-dimensional image to be displayed as a conjunctive three-dimensional image in the position that is contiguous with or close to the handheld device, based on a result of the calculation process performed by the calculation processing unit.
Further, the present invention provides a stereoscopic image displaying method used by a stereoscopic image display apparatus that displays a three-dimensional image by using an integral imaging method or a light beam reproduction method. The stereoscopic image displaying method includes detecting a position and an orientation direction of a handheld device that is positioned inside or near a display space provided over a three-dimensional display screen; performing a calculation process for displaying the three-dimensional image in a position that is contiguous with or close to the handheld device, based on the position and the orientation direction of the handheld device; and causing the three-dimensional image to be displayed as a conjunctive three-dimensional image in the position that is contiguous with or close to the handheld device, based on a result of the calculation process performed at the calculation processing step.
Furthermore, the present invention provides a computer program product having a computer readable medium including programmed instructions for displaying a three-dimensional image by using an integral imaging method or a light beam reproduction method, wherein the instructions, when executed by a computer, cause the computer to perform: detecting a position and an orientation direction of a handheld device that is positioned inside or near a display space provided over a three-dimensional display screen and held by a user; performing a calculation process for displaying the three-dimensional image in a position that is contiguous with or close to the handheld device, based on the position and the orientation direction of the handheld device; and causing the three-dimensional image to be displayed as a conjunctive three-dimensional image in the position that is contiguous with or close to the handheld device, based on a result of the calculation process.
Exemplary embodiments of a stereoscopic image display apparatus, a stereoscopic image displaying method, and a stereoscopic image displaying computer program will be explained, with reference to the accompanying drawings.
The CPU 1 controls the constituent elements by performing various types of computational processes according to the stereoscopic image displaying program. Characteristic processes according to the first embodiment that are performed by the CPU 1 based on the stereoscopic image displaying program will be explained below.
Next, the stereoscopic image displaying unit 5 will be explained.
The image displaying element 51 may be selected from among various types of display devices, such as a direct-view-type liquid crystal display device, a projection-type liquid crystal display device, a plasma display device, a field emission display device, and an organic Electro Luminescence (EL) display device, as long as pixels whose positions are fixed are arranged two-dimensionally in a matrix structure in the display screen.
As the light beam controlling element 52, a lenticular lens array that extends in a substantially vertical direction and has a cyclic structure in a substantially horizontal direction is used. In this situation, there is a parallax only in the horizontal direction x, and the image changes depending on the distance of sight. However, because there is no parallax in the vertical direction y, the same image is viewed regardless of the viewing position.
In the display screen of the image displaying element 51 included in the stereoscopic image displaying unit 5 according to the first embodiment, subpixels corresponding to the colors of red (R), green (G), and blue (B) are arranged in an array formation. The subpixels corresponding to the colors of R, G, and B are realized by placing color filters on the display screen in an appropriate manner.
The image that is output to the stereoscopic image displaying unit 5 cannot be perceived as a normal image when viewed without the light beam controlling element 52, because the parallax images are interleaved; the image is therefore not well suited to image compression such as Joint Photographic Experts Group (JPEG) or Moving Picture Experts Group (MPEG) compression. Thus, the image storing unit (i.e., the HDD 4) stores therein an image in which the parallax images are arranged in an array formation and that has been compressed in advance. When the three-dimensional image is reproduced (displayed), the three-dimensional image rendering unit 12 decodes the image read from the HDD 4 so as to reconstruct it and also performs an interleaving conversion process so as to convert it into a format that can be output to the stereoscopic image displaying unit 5. The three-dimensional image rendering unit 12 is also able to enlarge or reduce the decoded image before performing the interleaving conversion process, because the interleaving conversion process can be performed properly even after the size of the image is changed.
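As an illustrative sketch of the interleaving conversion process, the following fragment combines same-sized parallax images column by column into one parallax combined image. The column-to-viewpoint mapping, including the reversed order under each lens, is an assumption for illustration; the actual mapping depends on the pixel and barrier geometry of the stereoscopic image displaying unit 5.

```python
import numpy as np

def interleave_parallax_images(views):
    """Combine N same-sized parallax images into one parallax
    combined image by interleaving columns: horizontally adjacent
    output columns come from different viewpoints, so a lenticular
    sheet with a pitch of N columns sends each view in a different
    direction.

    views: list of H x W x 3 arrays, one per viewpoint.
    Returns an H x (W * N) x 3 array.
    """
    n = len(views)
    h, w, c = views[0].shape
    out = np.empty((h, w * n, c), dtype=views[0].dtype)
    for i, view in enumerate(views):
        # Columns under one lens are typically observed in reverse
        # order, so view i is written to the (n - 1 - i)-th slot.
        out[:, (n - 1 - i)::n, :] = view
    return out
```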
Returning to the description of the apparatus configuration, the real-object position detecting unit 11 detects the position and the orientation direction of the handheld device 8 that is positioned inside or near the display space of the stereoscopic image displaying unit 5.
The method for detecting the position and the orientation direction of the handheld device 8 may be selected out of various types of methods. According to the first embodiment, the position and the orientation direction of the handheld device 8 are detected by using the method described below.
As the point-like light emitting members 81 and 82 provided in the handheld device 8, for example, infrared light emitting diodes may be used. Each of the point-like light emitting members 81 and 82 does not necessarily have to be a point light source in a strict sense; a light source having a certain size is acceptable. In addition, an arrangement is preferable in which the color of the emitted light beam, the size of the light beam point, and the light emission conditions such as the time interval between light emissions are different between the point-like light emitting member 81 and the point-like light emitting member 82, so that it is possible to identify from which one of the point-like light emitting members 81 and 82 each light beam has been emitted.
The real-object position detecting unit 11 derives the position and the orientation direction of the handheld device 8, based on the photographed pictures of the emitted light beams that are included in the photographed images photographed by the stereo camera 71 and the stereo camera 72. More specifically, the real-object position detecting unit 11 detects the positions of the point-like light emitting members 81 and 82 based on the positions of the light beam points that have been recorded as the photographed pictures and a predetermined positional relationship between the stereo camera 71 and the stereo camera 72, by using the principle of triangulation.
When the coordinates of the positions of the point-like light emitting members 81 and 82 are obtained, it is possible to easily calculate a vector from the point-like light emitting member 81 to the point-like light emitting member 82. In other words, the orientation direction of the handheld device 8 is obtained as a direction vector connecting the two detected positions.
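The derivation described above can be sketched as follows, assuming that camera calibration has already converted each photographed light beam point into a viewing ray (an origin and a direction) in a common coordinate system. The function names, and the choice of member 81 as the reported device position, are illustrative assumptions.

```python
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Triangulate one light emitting point from two camera viewing
    rays (origin + t * direction): returns the midpoint of the
    shortest segment between the two rays."""
    o1, o2 = np.asarray(origin_a, float), np.asarray(origin_b, float)
    d1 = np.asarray(dir_a, float); d1 /= np.linalg.norm(d1)
    d2 = np.asarray(dir_b, float); d2 /= np.linalg.norm(d2)
    w0 = o1 - o2
    b = d1 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b                 # zero only for parallel rays
    t1 = (b * e - d) / denom
    t2 = (e - b * d) / denom
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0

def device_pose(p81, p82):
    """Device position and orientation from the triangulated positions
    of the point-like light emitting members 81 and 82; taking member
    81 as the reported position is an assumption for illustration."""
    p81, p82 = np.asarray(p81, float), np.asarray(p82, float)
    direction = p82 - p81               # vector from member 81 to 82
    return p81, direction / np.linalg.norm(direction)
```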
In the first embodiment, the position and the orientation direction of the handheld device 8 are detected by using the method explained above.
Returning to the description of the apparatus configuration, the three-dimensional image rendering unit 12 performs a calculation process for displaying a three-dimensional image 30 in a position that is contiguous with or close to the handheld device 8, based on the position and the orientation direction that have been detected by the real-object position detecting unit 11.
More specifically, the three-dimensional image rendering unit 12 renders the three-dimensional image 30 so that it is positioned at the point (e.g., (X1, Y1, Z1)) having the positional coordinates that have been detected by the real-object position detecting unit 11 and is oriented in the same direction as the direction vector (e.g., (X1-X2, Y1-Y2, Z1-Z2)) that has been calculated by the real-object position detecting unit 11. The three-dimensional image 30 that has been rendered in this manner is displayed in a position that is contiguous with or close to the handheld device 8. In the following explanation, the three-dimensional image 30 that is displayed in a position that is contiguous with or close to the handheld device 8 will be referred to as a conjunctive three-dimensional image 30.
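One plausible way to realize this placement is to assemble a model transform from the detected position and direction vector, as in the sketch below; the mapping of the device axis to the model's local z axis and the choice of up vector are assumptions for illustration.

```python
import numpy as np

def model_matrix(position, direction, up=(0.0, 1.0, 0.0)):
    """4 x 4 transform that places the model of the conjunctive image
    at the detected position with its local z axis along the device
    axis (axis mapping and up vector are illustrative choices)."""
    z = np.asarray(direction, float)
    z /= np.linalg.norm(z)
    x = np.cross(np.asarray(up, float), z)
    if np.linalg.norm(x) < 1e-6:        # device pointing along 'up'
        x = np.cross((1.0, 0.0, 0.0), z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = x, y, z
    m[:3, 3] = np.asarray(position, float)
    return m
```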
With this arrangement, where the conjunctive three-dimensional image 30 is displayed in a position that is contiguous with or close to the handheld device 8, it is possible to display the conjunctive three-dimensional image 30 and the handheld device 8 integrally. As a result, it is possible to provide the user with a handheld device 8 that is virtually extended by the size of the displayed conjunctive three-dimensional image 30. Thus, by moving the handheld device 8, the user is able to move the conjunctive three-dimensional image 30 that is integrally displayed with it and thereby operate the conjunctive three-dimensional image 30 intuitively.
When performing the calculation process, the three-dimensional image rendering unit 12 checks the display position of the conjunctive three-dimensional image 30 on the stereoscopic image displaying unit 5. In a case where a part or all of the conjunctive three-dimensional image 30 is to be displayed behind the display screen of the stereoscopic image displaying unit 5 (i.e., on the virtual image side), the three-dimensional image rendering unit 12 causes the portion that extends behind the display screen to be displayed as an optical virtual image.
In other words, the three-dimensional image rendering unit 12 causes the part of the conjunctive three-dimensional image 30 positioned behind the display screen of the stereoscopic image displaying unit 5 (i.e., on the virtual image side) to be displayed as an optical virtual image and causes the part positioned in front of the display screen (i.e., on the real image side) to be displayed as an optical real image. With this arrangement, the user is able to directly point even to the virtual image side of the stereoscopic image displaying unit 5 by operating the handheld device 8 and using the conjunctive three-dimensional image 30.
Next, an operation of the stereoscopic image display apparatus 100 according to the first embodiment will be explained.
First, the stereo cameras 71 and 72 photograph light beams emitted from the point-like light emitting members 81 and 82 that are provided in the handheld device 8 (step S11). The real-object position detecting unit 11 then derives the position and the orientation direction of the handheld device 8 with respect to the stereoscopic image displaying unit 5, based on the photograph information obtained by the stereo cameras 71 and 72 (step S12).
Next, the three-dimensional image rendering unit 12 performs a calculation process for rendering a three-dimensional image in a position that is contiguous with or close to the handheld device 8, based on the position and the orientation direction of the handheld device 8 that have been derived at step S12 (step S13), and causes the three-dimensional image to be displayed as the conjunctive three-dimensional image 30 in a position that is contiguous with or close to the handheld device 8 (step S14).
At the following step, namely step S15, the real-object position detecting unit 11 judges whether this process should be finished. In a case where, for example, the photograph information that is input from the stereo cameras 71 and 72 includes photographed pictures of the emitted light beams (step S15: No), the process returns to step S11.
On the other hand, at step S15, in a case where the photograph information that is input from the stereo cameras 71 and 72 includes no photographed pictures of the emitted light beams because, for example, the handheld device 8 is positioned on the outside of the photographing areas of the stereo cameras 71 and 72 (step S15: Yes), the process is finished.
As explained above, because the conjunctive three-dimensional image 30 is displayed in a position that is contiguous with or close to the handheld device 8, it is possible to display the conjunctive three-dimensional image 30 integrally with the handheld device 8. Thus, it is possible to virtually extend the handheld device 8 by the size of the displayed conjunctive three-dimensional image 30. With this arrangement, the user is able to directly point to another object image 40 displayed by the stereoscopic image displaying unit 5 by operating the handheld device 8 and using the conjunctive three-dimensional image 30 that is integrally displayed with it. Thus, it is possible to improve operability for the three-dimensional images.
Next, a stereoscopic image display apparatus according to a second embodiment of the present invention will be explained. Some of the constituent elements that are the same as those explained in the first embodiment will be referred to by using the same reference characters, and the explanation thereof will be omitted.
Based on the position of another three-dimensional image (hereinafter, an "object image") 40 that is displayed by the three-dimensional image rendering unit 14 and the position of the conjunctive three-dimensional image 30 that is displayed together with the handheld device 8 by the three-dimensional image rendering unit 14, the collision judging unit 13 judges whether the two three-dimensional images collide with each other. Also, when having judged that the two three-dimensional images collide with each other, the collision judging unit 13 outputs collision position information related to the collision position of the two three-dimensional images to the three-dimensional image rendering unit 14. Let us assume that it is possible to obtain the positions of the conjunctive three-dimensional image 30 and the object image 40 based on, for example, a result of a calculation process performed by the three-dimensional image rendering unit 14.
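The embodiment does not prescribe a particular collision test. As a minimal sketch, each of the two three-dimensional images could be approximated by a bounding sphere, with a representative collision position on the line between the centers reported as the collision position information; the sphere approximation and helper names are assumptions.

```python
import numpy as np

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Coarse collision judgment between two displayed images, each
    approximated by a bounding sphere."""
    a = np.asarray(center_a, float)
    b = np.asarray(center_b, float)
    return np.linalg.norm(a - b) <= radius_a + radius_b

def collision_position(center_a, radius_a, center_b, radius_b):
    """Representative collision position on the segment between the
    two centers, usable as the collision position information."""
    a = np.asarray(center_a, float)
    b = np.asarray(center_b, float)
    return a + (b - a) * (radius_a / (radius_a + radius_b))
```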
The three-dimensional image rendering unit 14 has functions that are similar to those of the three-dimensional image rendering unit 12 explained above. The three-dimensional image rendering unit 14 causes the conjunctive three-dimensional image 30 to be displayed in a position that is contiguous with or close to the handheld device 8 and also causes the object image 40 to be displayed on one or both of the real image side and the virtual image side of the stereoscopic image displaying unit 5.
In addition, based on the collision position information that has been input from the collision judging unit 13, the three-dimensional image rendering unit 14 exercises control so as to change the rendering of the object image 40 at the collision position indicated in the collision position information.
Let us discuss an example in which the conjunctive three-dimensional image 30, moved by an operation of the handheld device 8, comes into contact with the object image 40.
Next, an operation of the stereoscopic image display apparatus 101 according to the second embodiment will be explained.
First, the real-object position detecting unit 11 controls the stereo cameras 71 and 72 so that the stereo cameras 71 and 72 photograph light beams emitted from the point-like light emitting members 81 and 82 that are provided in the handheld device 8 (step S21). The real-object position detecting unit 11 then derives the position and the orientation direction of the handheld device 8 with respect to the stereoscopic image displaying unit 5, based on the photograph information obtained by the stereo cameras 71 and 72 (step S22).
Next, the three-dimensional image rendering unit 14 performs a calculation process for rendering a three-dimensional image in a position that is contiguous with or close to the handheld device 8, based on the position and the orientation direction of the handheld device 8 that have been derived at step S22 (step S23), and causes the three-dimensional image to be displayed as the conjunctive three-dimensional image 30 in a position that is contiguous with or close to the handheld device 8 (step S24).
Subsequently, based on the display positions of the conjunctive three-dimensional image 30 and the object image 40 that are displayed by the three-dimensional image rendering unit 14, the collision judging unit 13 judges whether these two images collide with each other (step S25). In a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 and the object image 40 do not collide with each other (step S25: No), the process immediately proceeds to the procedure at step S27.
On the other hand, at step S25, in a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 and the object image 40 collide with each other (step S25: Yes), the three-dimensional image rendering unit 14 changes the rendering of the object image 40 corresponding to the collision position, based on the collision position information that has been obtained by the collision judging unit 13 (step S26), and the process proceeds to the procedure at step S27.
At the following step, namely step S27, the real-object position detecting unit 11 judges whether this process should be finished. In a case where, for example, the position information of the handheld device 8 is continually input from the stereo cameras 71 and 72 (step S27: No), the process returns to step S21.
On the other hand, at step S27, in a case where the position information of the handheld device 8 is no longer input because, for example, the handheld device 8 is positioned on the outside of the photographing areas of the stereo cameras 71 and 72 (step S27: Yes), the process is finished.
As explained above, according to the second embodiment, it is possible to directly point to the other three-dimensional image that is displayed by the stereoscopic image displaying unit 5, by using the conjunctive three-dimensional image 30 displayed in a position that is contiguous with or close to the handheld device 8. Thus, it is possible to improve operability for the three-dimensional images.
In addition, because it is possible to change the display of the object image 40 according to the collision of (i.e., the contact between) the conjunctive three-dimensional image 30 and the object image 40, it is possible to improve interactiveness.
In the second embodiment, the example is explained in which, when it has been judged that the images collide with each other, the rendering of only the collided object image 40 is changed. However, the present invention is not limited to this example. Another arrangement is acceptable in which the rendering of only the conjunctive three-dimensional image 30 is changed, while the rendering of the collided object image 40 is not changed. Yet another arrangement is acceptable in which the rendering of both of the three-dimensional images is changed.
Next, a stereoscopic image display apparatus according to a third embodiment of the present invention will be explained. Some of the constituent elements that are the same as those explained in the first and the second embodiments will be referred to by using the same reference characters, and the explanation thereof will be omitted.
The area judging unit 15 judges whether the handheld device 8 is present within a space area A that is specified near the stereoscopic image displaying unit 5, based on the position and the orientation direction of the handheld device 8 that have been derived by the real-object position detecting unit 11 and outputs a result of the judging process to the three-dimensional image rendering unit 16, as space position information.
More specifically, the area judging unit 15 compares coordinate data of the space area A that is stored in advance with the position and the orientation direction of the handheld device 8 that have been derived by the real-object position detecting unit 11. In a case where the handheld device 8 is positioned on the outside of the space area A, the area judging unit 15 outputs space position information indicating this situation to the three-dimensional image rendering unit 16. It is assumed that the coordinate data of the space area A is stored in the HDD 4 (i.e., the image storing unit) in advance. It is preferable to have an arrangement in which the area specified as the space area A is substantially the same as an area (i.e., the display space) in which a viewer is able to properly view the three-dimensional images displayed by the stereoscopic image displaying unit 5.
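If the coordinate data of the space area A is modeled as an axis-aligned box (an assumption for illustration; the embodiment does not fix a representation), the judging process reduces to a containment test, as in this minimal sketch:

```python
import numpy as np

def inside_space_area(position, area_min, area_max):
    """True when the detected position of the handheld device lies
    inside the box that models the space area A; area_min and
    area_max hold the smallest and largest corner coordinates."""
    p = np.asarray(position, float)
    return bool(np.all(p >= np.asarray(area_min)) and
                np.all(p <= np.asarray(area_max)))
```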
According to the third embodiment, the information indicating that the handheld device 8 is positioned on the outside of the space area A is output as the space position information. However, another arrangement is acceptable in which information indicating a relative positional relationship between the space area A and the conjunctive three-dimensional image 30 is output as the space position information. In this situation, an additional arrangement is acceptable in which, at a point in time when it has been judged that the conjunctive three-dimensional image 30 is positioned near the boundary of the space area A, the information indicating the relative positional relationship between the space area A and the conjunctive three-dimensional image 30 is output.
The three-dimensional image rendering unit 16 has functions that are similar to those of the three-dimensional image rendering unit 14 explained above. In addition, when having confirmed that the handheld device 8 is positioned on the outside of the space area A based on the space position information that has been input from the area judging unit 15, the three-dimensional image rendering unit 16 changes the opacity with which the conjunctive three-dimensional image 30 is rendered from 100 percent to zero so that the conjunctive three-dimensional image 30 is not displayed.
Next, an operation of the stereoscopic image display apparatus 102 according to the third embodiment will be explained, with reference to
First, the real-object position detecting unit 11 controls the stereo cameras 71 and 72 so that the stereo cameras 71 and 72 photograph light beams emitted from the point-like light emitting members 81 and 82 that are provided in the handheld device 8 (step S31). The real-object position detecting unit 11 then calculates the position and the orientation direction of the handheld device 8 with respect to the stereoscopic image displaying unit 5, based on the photograph information obtained by the stereo cameras 71 and 72 (step S32).
Next, the three-dimensional image rendering unit 16 performs a calculation process for rendering a three-dimensional image in a position that is contiguous with or close to the handheld device 8, based on the position and the orientation direction of the handheld device 8 that have been derived at step S32 (step S33). At this time, the area judging unit 15 compares the position and the orientation direction of the handheld device 8 that have been calculated at step S32 with the coordinate data of the space area A and judges whether the handheld device 8 is present within the space area A (step S34).
At step S34, in a case where the area judging unit 15 has judged that the handheld device 8 is not present within the space area A (step S34: No), the three-dimensional image rendering unit 16 sets the opacity with which the conjunctive three-dimensional image 30 is rendered to zero, based on the judgment result (step S35), and the process proceeds to the procedure at step S39.
On the other hand, at step S34, in a case where the area judging unit 15 has judged that the handheld device 8 is present within the space area A (step S34: Yes), the three-dimensional image rendering unit 16 causes the three-dimensional image obtained in the calculation process at step S33 to be displayed as the conjunctive three-dimensional image 30 in a position that is contiguous with or close to the handheld device 8 (step S36).
Subsequently, based on the display positions of the conjunctive three-dimensional image 30 and the object image 40 that are displayed by the three-dimensional image rendering unit 16, the collision judging unit 13 judges whether these two images collide with each other (step S37). In a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 and the object image 40 do not collide with each other (step S37: No), the process immediately proceeds to the procedure at step S39.
On the other hand, at step S37, in a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 and the object image 40 collide with each other (step S37: Yes), the three-dimensional image rendering unit 16 changes the rendering of the object image 40 corresponding to the collision position, based on the collision position information that has been obtained by the collision judging unit 13 (step S38), and the process proceeds to the procedure at step S39.
At the following step, namely step S39, the real-object position detecting unit 11 judges whether this process should be finished. In a case where, for example, the position information of the handheld device 8 is continually input from the stereo cameras 71 and 72 (step S39: No), the process returns to step S31.
On the other hand, at step S39, in a case where the position information of the handheld device 8 is no longer input because, for example, the handheld device 8 is positioned on the outside of the photographing areas of the stereo cameras 71 and 72 (step S39: Yes), the process is finished.
As explained above, according to the third embodiment, it is possible to exercise control so that the conjunctive three-dimensional image 30 is not displayed in a case where the handheld device 8 is in a position beyond the display limit for three-dimensional images. Thus, it is possible to prevent the conjunctive three-dimensional image 30 from being displayed more than necessary.
In the third embodiment, the example is explained in which, when the handheld device 8 is positioned on the outside of the space area A, control is exercised so that the conjunctive three-dimensional image 30 is not displayed, by changing the opacity with which the conjunctive three-dimensional image 30 is rendered together with the handheld device 8 from 100 percent to zero. However, the present invention is not limited to this example. For example, another arrangement is acceptable in which, in a case where the area judging unit 15 outputs space position information indicating a relative positional relationship between the space area A and the conjunctive three-dimensional image 30, the three-dimensional image rendering unit 16 changes, in stages, the opacity with which the conjunctive three-dimensional image 30 is rendered, according to the relative positional relationship. In this situation, for example, by having an arrangement in which the opacity is lowered in stages as the handheld device 8 approaches the boundary of the space area A, it is possible to express the disappearance of the conjunctive three-dimensional image 30 in a more natural manner.
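For illustration, the staged change just described can be sketched as follows, again assuming the space area A is modeled as an axis-aligned box and assuming a hypothetical fade_width parameter that sets how far inside the boundary the fade begins (neither is specified by the embodiment):

```python
import numpy as np

def staged_opacity(position, area_min, area_max, fade_width=30.0):
    """Opacity (0 to 100) for rendering the conjunctive image:
    100 deep inside the space area A, falling off over the last
    fade_width coordinate units before the boundary, 0 outside."""
    p = np.asarray(position, float)
    margins = np.minimum(p - np.asarray(area_min, float),
                         np.asarray(area_max, float) - p)
    margin = float(margins.min())       # distance to the nearest face
    if margin <= 0.0:
        return 0.0                      # outside the space area A
    return 100.0 * min(1.0, margin / fade_width)
```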
Next, a stereoscopic image display apparatus according to a fourth embodiment of the present invention will be explained. Some of the constituent elements that are the same as those explained in the first and the second embodiments will be referred to by using the same reference characters, and the explanation thereof will be omitted.
The three-dimensional image rendering unit 17 has functions that are similar to those of the three-dimensional image rendering unit 14 explained above. In addition, the three-dimensional image rendering unit 17 causes image candidates 61 to 63, which are three-dimensional images selectable as the conjunctive three-dimensional image 30, to be displayed in a selection area 60.
Further, when having received, from the collision judging unit 13, collision position information indicating that the conjunctive three-dimensional image 30 has collided with (has come in contact with) one of the image candidates 61 to 63 displayed in the selection area 60, the three-dimensional image rendering unit 17 causes the three-dimensional image corresponding to the image candidate in the collision position indicated in the collision position information to be displayed as the conjunctive three-dimensional image 30. It is assumed that the three-dimensional images that are displayed as the image candidates are stored in the HDD 4 (i.e., the image storing unit) in advance.
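As an illustrative sketch of this swap, a table could map each image candidate in the selection area 60 to a three-dimensional image stored on the HDD 4; every name below (the table, load_model, set_conjunctive_image) is hypothetical rather than part of the described apparatus.

```python
# Hypothetical table mapping each image candidate in the selection
# area 60 to a three-dimensional image stored on the HDD 4.
CANDIDATE_MODELS = {
    61: "models/candidate_61.bin",   # hypothetical file names
    62: "models/candidate_62.bin",
    63: "models/candidate_63.bin",
}

def on_candidate_collision(candidate_id, load_model, set_conjunctive_image):
    """Replace the conjunctive three-dimensional image with the image
    candidate the user touched (both helpers are injected callables)."""
    model = load_model(CANDIDATE_MODELS[candidate_id])
    set_conjunctive_image(model)
```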
Next, an operation of the stereoscopic image display apparatus 103 according to the fourth embodiment will be explained, with reference to
First, the real-object position detecting unit 11 controls the stereo cameras 71 and 72 so that the stereo cameras 71 and 72 photograph light beams emitted from the point-like light emitting members 81 and 82 that are provided in the handheld device 8 (step S41). The real-object position detecting unit 11 then derives the position and the orientation direction of the handheld device 8 with respect to the stereoscopic image displaying unit 5, based on the photograph information obtained by the stereo cameras 71 and 72 (step S42).
Next, the three-dimensional image rendering unit 17 performs a calculation process for rendering a three-dimensional image in a position that is contiguous with or close to the handheld device 8, based on the position and the orientation direction of the handheld device 8 that have been derived at step S42 (step S43), and causes the three-dimensional image to be displayed as the conjunctive three-dimensional image 30 in a position that is contiguous with or close to the handheld device 8 (step S44).
Subsequently, the collision judging unit 13 judges whether the conjunctive three-dimensional image 30 displayed by the three-dimensional image rendering unit 17 collides with the object image 40 or any of the image candidates 61 to 63 (step S45). In a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 collides with no image (step S45: No), the process immediately proceeds to the procedure at step S49.
On the other hand, at step S45, in a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 collides with the object image 40 or one or more of the image candidates 61 to 63 (step S45: Yes), the three-dimensional image rendering unit 17 judges whether the conjunctive three-dimensional image 30 collides with one or more of the image candidates 61 to 63, based on the collision position information that has been obtained by the collision judging unit 13 (step S46).
In this situation, in a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 collides with one or more of the image candidates 61 to 63 (step S46: Yes), the three-dimensional image rendering unit 17 causes a three-dimensional image corresponding to the image candidate in the collision position to be displayed as the conjunctive three-dimensional image 30 (step S47), and the process proceeds to the procedure at step S49.
On the other hand, at step S46, in the case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 collides with the object image 40 (step S46: No), the three-dimensional image rendering unit 17 changes the rendering of the object image 40 corresponding to the collision position, based on the collision position information obtained by the collision judging unit 13 (step S48), and the process proceeds to the procedure at step S49.
At the following step, namely step S49, the real-object position detecting unit 11 judges whether this process should be finished. In a case where, for example, the position information of the handheld device 8 is continually input from the stereo cameras 71 and 72 (step S49: No), the process returns to step S41.
On the other hand, at step S49, in a case where the position information of the handheld device 8 is no longer input because, for example, the handheld device 8 is positioned on the outside of the photographing areas of the stereo cameras 71 and 72 (step S49: Yes), the process is finished.
As explained above, according to the fourth embodiment, it is possible to easily change the image of the conjunctive three-dimensional image 30. Thus, it is possible to improve interactiveness.
Next, a stereoscopic image display apparatus according to a fifth embodiment of the present invention will be explained. Some of the constituent elements that are the same as those explained in the first and the second embodiments will be referred to by using the same reference characters, and the explanation thereof will be omitted.
The rotation angle detecting unit 18 detects a rotation angle of the handheld device 8 about a predetermined axis. The method for detecting the rotation angle may be selected out of various types of methods. According to the fifth embodiment, the rotation angle of the handheld device 8 about the predetermined axis is detected by using the method described below.
Like the point-like light emitting members 81 and 82 explained above, the point-like light emitting member 83 may be configured with a point light source such as a light emitting diode. The linear light emitting member 84 is provided so as to extend in a circle around the axis B of the handheld device 8. The linear light emitting member 84 may be configured with, for example, a translucent disc through which light can be guided and a light emitting diode placed at the center thereof. With this arrangement, the light beam emitted from the light emitting diode travels within the translucent disc and is irradiated to the outside through the outer circumference of the disc, so that the linear light emitting member 84 is formed. The direction in which the axis of the handheld device 8 extends, which is used as a reference for detecting the rotation angle, may be set arbitrarily; however, it is preferable to set the direction according to the position in which the handheld device 8 is held by the user.
The rotation angle detecting unit 18 derives the rotation angle of the handheld device 8 about the predetermined axis, based on the photographed pictures of the light beams emitted from the point-like light emitting member 83 and the linear light emitting member 84 that are included in the photographed images input from the stereo cameras 71 and 72. The rotation angle detecting unit 18 then outputs the derived rotation angle to the three-dimensional image rendering unit 19 as angle information.
In the same manner as described above, the real-object position detecting unit 11 derives the position and the orientation direction of the handheld device 8, based on the photographed pictures of the emitted light beams included in the photographed images that have been input from the stereo cameras 71 and 72.
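Under one possible reading of this arrangement, the rotation angle can be recovered by projecting the detected position of the point-like light emitting member 83 onto the plane perpendicular to the axis B and measuring its angle from a reference direction, assuming the member 83 is mounted off the axis. The following sketch is an illustrative reconstruction, not the exact derivation of the embodiment.

```python
import numpy as np

def roll_angle(axis_point, axis_dir, member_pos, reference_dir):
    """Rotation angle (degrees) of the handheld device 8 about its
    axis B, assuming the point-like light emitting member 83 sits
    off the axis: project the member position onto the plane
    perpendicular to the axis and measure its angle from a fixed
    reference direction (reference_dir must not be parallel to B)."""
    z = np.asarray(axis_dir, float)
    z /= np.linalg.norm(z)
    v = np.asarray(member_pos, float) - np.asarray(axis_point, float)
    v -= (v @ z) * z                    # in-plane component of the member
    r = np.asarray(reference_dir, float)
    r -= (r @ z) * z                    # in-plane reference direction
    r /= np.linalg.norm(r)
    s = np.cross(z, r)                  # second in-plane basis vector
    return float(np.degrees(np.arctan2(v @ s, v @ r)))
```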
Returning to the description of the apparatus configuration, the three-dimensional image rendering unit 19 performs the calculation process for rendering the conjunctive three-dimensional image 30, based on the position and the orientation direction of the handheld device 8 that have been derived by the real-object position detecting unit 11 and the rotation angle that has been derived by the rotation angle detecting unit 18.
Next, an operation of the stereoscopic image display apparatus 104 according to the fifth embodiment will be explained.
First, the real-object position detecting unit 11 controls the stereo cameras 71 and 72 so that the stereo cameras 71 and 72 photograph light beams emitted from the point-like light emitting member 83 and the linear light emitting member 84 that are provided in the handheld device 8 (step S51). The real-object position detecting unit 11 then derives the position and the orientation direction of the handheld device 8 with respect to the stereoscopic image displaying unit 5, based on the photograph information obtained by the stereo cameras 71 and 72 (step S52).
Next, the rotation angle detecting unit 18 derives the rotation angle of the handheld device 8 about a predetermined axis, based on the photograph information obtained by the stereo cameras 71 and 72 (step S53). The three-dimensional image rendering unit 19 then performs a calculation process for rendering a three-dimensional image in a position that is contiguous with or close to the handheld device 8, based on the position and the orientation direction of the handheld device 8 that have been derived at step S52 and the rotation angle of the handheld device 8 that has been derived at step S53 (step S54), and causes the three-dimensional image to be displayed as the conjunctive three-dimensional image 30 in the position that is contiguous with or close to the handheld device 8 (step S55).
Subsequently, based on the display positions of the conjunctive three-dimensional image 30 and the object image 40 that are displayed by the three-dimensional image rendering unit 19, the collision judging unit 13 judges whether the conjunctive three-dimensional image 30 and the object image 40 collide with each other (step S56). In a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 and the object image 40 do not collide with each other (step S56: No), the process immediately proceeds to the procedure at step S58.
On the other hand, at step S56, in a case where the collision judging unit 13 has judged that the conjunctive three-dimensional image 30 and the object image 40 collide with each other (step S56: Yes), the three-dimensional image rendering unit 19 changes the rendering of the object image 40 corresponding to the collision position, based on the collision position information that has been obtained by the collision judging unit 13 (step S57), and the process proceeds to the procedure at step S58.
At the following step, namely step S58, the real-object position detecting unit 11 judges whether this process should be finished. In a case where, for example, the photograph information that is input from the stereo cameras 71 and 72 includes photographed pictures of the emitted light beams (step S58: No), the process returns to step S51.
On the other hand, at step S58, in a case where the photograph information that is input from the stereo cameras 71 and 72 includes no photographed pictures of the emitted light beams because, for example, the handheld device 8 is positioned on the outside of the photographing areas of the stereo cameras 71 and 72 (step S58: Yes), the process is finished.
As explained above, according to the fifth embodiment, the display of the conjunctive three-dimensional image 30 is changed according to the rotation angle of the handheld device 8. Thus, it is possible to display the conjunctive three-dimensional image 30 in a more realistic manner. Consequently, it is possible to further improve the interactiveness.
The first to the fifth embodiments of the present invention have been explained above. However, the present invention is not limited to these exemplary embodiments. It is possible to apply various modifications, replacements, and additions to these embodiments without departing from the scope of the present invention. For example, it is possible to make the speed of the calculation processes higher by using a Graphics Processing Unit (GPU) together with the CPU.
The program executed by the stereoscopic image display apparatus 100 according to the embodiment is provided as being incorporated in advance in the ROM 2 or the HDD 4. However, the present invention is not limited to this arrangement. Another arrangement is acceptable in which the program is provided as being recorded on a computer readable recording medium such as a Compact Disk Read-Only Memory (CD-ROM), a Flexible Disk (FD), a Compact Disk Recordable (CD-R), or a Digital Versatile Disk (DVD), in a file that is in an installable format or in an executable format. Yet another arrangement is acceptable in which the program is stored in a computer that is connected to a network such as the Internet and is provided as being downloaded via the network. It is also acceptable to provide or distribute the program via a network such as the Internet.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
Priority claim: Japanese Patent Application No. 2007-057551 (JP, national), filed March 2007.
PCT filing: PCT/JP2008/054105 (WO), filed Feb. 29, 2008; 371(c) date: Jul. 17, 2008.