This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2012-037502, filed on Feb. 23, 2012; the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to an image display apparatus.
Augmented reality presentation technology fuses the real world and the virtual world. In such technology, a virtual object image is superimposed onto a real-world image. Alternatively, a real-world image is acquired using a camera, the acquired image is used as the virtual world, and a virtual object image is superimposed onto that image.
Harmony between the real world and virtual objects is desirable in new high-presence displays.
According to one embodiment, an image display apparatus includes a data output unit, a first display device and a second display device. The data output unit is configured to output first data and second data. The first data includes information of a first image. The second data includes information of a second image. The first display device includes a first display unit configured to display the first image based on the first data. The first display unit is optically transmissive. The second display device includes a second display unit configured to display the second image based on the second data. The second image displayed by the second display unit is viewable by a human viewer via the first display unit.
The data output unit is configured to implement at least one selected from a first output operation and a second output operation.
The first output operation includes a first operation and a second operation. The first operation is configured to output the first data including the information of the first image including a first display object. The second operation is configured to output, after the first operation, the second data including the information of the second image including a second display object based on the first display object. A position of the second display object in the second image is a position overlaying the first display object as viewed by the human viewer or a position on an extension of a movement of the first display object as viewed by the human viewer.
The second output operation includes a third operation and a fourth operation. The third operation is configured to output the second data including the information of the second image including a third display object. The fourth operation is configured to output, after the third operation, the first data including the information of the first image including a fourth display object based on the third display object. A position of the fourth display object in the first image is a position overlaying the third display object as viewed by the human viewer or a position on an extension of a movement of the third display object as viewed by the human viewer.
Various embodiments will be described hereinafter with reference to the accompanying drawings.
The drawings are schematic or conceptual; and the relationships between the thicknesses and the widths of portions, the proportions of sizes between portions, etc., are not necessarily the same as the actual values thereof. Further, the dimensions and/or the proportions may be illustrated differently between the drawings, even for identical portions.
In the drawings and the specification of the application, components similar to those described in regard to a drawing thereinabove are marked with like reference numerals, and a detailed description is omitted as appropriate.
As illustrated in
The data output unit 30 outputs first data 10d of a first image and second data 20d of a second image. The first data 10d is supplied to the first display device 10 from the data output unit 30. The second data 20d is supplied to the second display device 20 from the data output unit 30.
For example, the first data 10d and the second data 20d are generated by the data output unit 30. As described below, the first data 10d and the second data 20d may be generated by a portion separate from the data output unit 30. In such a case, the data output unit 30 acquires the generated first data 10d and second data 20d via any communication path and outputs the acquired first data 10d and second data 20d.
The first display device 10 includes a first display unit 11. The first display unit 11 displays the first image based on the first data 10d. The first display unit 11 is optically transmissive. For example, the first display device 10 is a head mounted display device (HMD) that is wearable by a human viewer 80. For example, the first display device 10 is linked to the movement of the head 80h of the user (the human viewer 80).
For example, the first display unit 11 includes a left eye display unit 11a that is arrangeable in front of the left eye of the human viewer 80, and a right eye display unit 11b that is arrangeable in front of the right eye of the human viewer 80. In addition to the first display unit 11, the first display device 10 further includes a holding unit 13 that enables the first display unit 11 to be held by the head of the human viewer 80. For example, the first display unit 11 is disposed at a position corresponding to the lenses of glasses. For example, the holding unit 13 is a portion corresponding to the temple arms of glasses. An example of the first display device 10 is described below.
The second display device 20 includes a second display unit 21. The second display unit 21 displays the second image based on the second data 20d. In this example, the second display device 20 further includes a housing 23 that contains the second display unit 21. The second image that is displayed by the second display unit 21 is viewable by the human viewer 80 via the first display unit 11. For example, the second display device 20 is not linked to the movement of the head 80h of the human viewer 80. For example, the second display device 20 is a stationary display device. Or, the second display device 20 may be a portable display device that is not linked to the movement of the head 80h of the human viewer 80.
For example, the data output unit 30 is connected to an input/output unit 38 via a wired or wireless communication path 38a. The input/output unit 38 is connected to a component outside the image display apparatus 110 via a wired or wireless communication path 38b. Image data ID for the displays is supplied from outside the image display apparatus 110 to the data output unit 30 via the communication path 38b, the input/output unit 38, and the communication path 38a.
For example, the data output unit 30 generates the first data 10d and the second data 20d based on the Image data ID. Then, the data output unit 30 outputs the first data 10d and the second data 20d that are generated.
In this example, the image display apparatus 110 further includes a first sensor 50. The first sensor 50 detects the relative position and the relative angle between the first display device 10 and the second display device 20. The first sensor 50 may include, for example, an imaging device (a camera, etc.), a combination of an electromagnetic wave emitting unit and an electromagnetic wave detection unit, a combination of a sound wave emitting unit and a sound wave detection unit, etc.
The first sensor 50 is connected to the data output unit 30 by a wired or wireless first sensor communication path 35. Position/orientation data 35d relating to the relative position and the relative angle between the first display device 10 and the second display device 20 that are detected by the first sensor 50 is supplied to the data output unit 30 by way of the first sensor communication path 35.
In this example, the image display apparatus 110 further includes a sound producing unit 40. The sound producing unit 40 may include, for example, a speaker, etc. The sound producing unit 40 is connected to the data output unit 30 via a wired or wireless sound producing unit communication path 34. The sound producing unit 40 produces sound in an operation (at least one operation selected from the first output operation and the second output operation described below) of the data output unit 30. The sound producing unit 40 may be additionally provided in the first display device 10. The sound producing unit 40 may be additionally provided in the second display device 20.
The second display device 20 may include, for example, a liquid crystal display device, an organic electroluminescence display device, a plasma display device, a projection display device (e.g., a projector), etc. The first sensor 50 may include, for example, an imaging device and an electronic device that performs image analysis of the image that is imaged by the imaging device. The data output unit 30 may include a computer. For example, the computer may include a display controller. The computer is connectable to a network. The computer may include a communication terminal that is connectable to a computer in the cloud.
As illustrated in
In the embodiment, the configuration of the first display device 10 is arbitrary; and, for example, the first display unit 11 may generate the first image 11d.
Because the first display unit 11 is optically transmissive, it is possible for the human viewer 80 to view an image of the background (a background image BGI) via the first display unit 11. For example, the background image BGI is a second image 21d that is displayed by the second display unit 21 of the second display device 20. Or, for example, the background image BGI is an image of an object (an object image D70) existing in a region around the first display device 10 (around the human viewer 80). For example, the object image D70 may be a virtual image formed by a mirror, etc. The object image D70 may be an image of an object existing in the region around the first display device 10 (around the human viewer 80). The object image D70 may be the image of at least a portion of the body of the human viewer 80 that exists in the region around the first display device 10.
The first image 11d displayed by the first display unit 11 may be viewed by the human viewer 80 as being superimposed onto the background image BGI (e.g., at least one selected from the second image 21d and the object image D70) which is viewed via the first display unit 11.
These drawings also illustrate a display method according to a second embodiment described below.
The data output unit 30 implements at least one selected from a first output operation S1 illustrated in
As illustrated in
In the first operation S110, the data output unit 30 outputs the first data 10d to display a first display object in the first image 11d. The first data 10d that is output is supplied to the first display device 10.
The data output unit 30 implements the second operation S120 after the first operation S110. In the second operation S120, the data output unit 30 outputs the second data 20d to display a display object (a second display object) in the second image 21d based on the first display object at a position overlaying the first display object as viewed by the human viewer 80 or at a position on an extension of the movement of the first display object as viewed by the human viewer 80. The second data 20d that is output is supplied to the second display device 20.
In the third operation S130, the data output unit 30 outputs the second data 20d to display a third display object in the second image 21d. The second data 20d that is output is supplied to the second display device 20.
The data output unit 30 implements the fourth operation S140 after the third operation S130. In the fourth operation S140, the data output unit 30 outputs the first data 10d to display a display object (a fourth display object) in the first image 11d based on the third display object at a position overlaying the third display object as viewed by the human viewer 80 or at a position on an extension of the movement of the third display object as viewed by the human viewer 80. The first data 10d that is output is supplied to the first display device 10.
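A minimal sketch of how a data output unit might sequence these four operations is given below. This is not the implementation of the embodiment; the class names, the show interface, and the display-object representation are assumptions introduced only for illustration.

```python
from dataclasses import dataclass


@dataclass
class DisplayObject:
    """A display object with a position in the viewer's field of view."""
    name: str
    x: float
    y: float


class Display:
    """Stand-in for a display device; records what it is asked to show."""
    def __init__(self, name):
        self.name = name

    def show(self, obj):
        print(f"{self.name} displays {obj.name} at ({obj.x}, {obj.y})")


class DataOutputUnit:
    """Drives a transmissive first display (e.g., an HMD) and a second display."""
    def __init__(self, first_display, second_display):
        self.first_display = first_display    # receives the first data 10d
        self.second_display = second_display  # receives the second data 20d

    def first_output_operation(self, d10):
        # First operation S110: display the first display object in the first image.
        self.first_display.show(d10)
        # Second operation S120: display a second display object in the second image
        # at a position overlaying D10 (or on an extension of its movement)
        # as viewed by the human viewer.
        d15 = DisplayObject("D15", d10.x, d10.y)
        self.second_display.show(d15)

    def second_output_operation(self, d20):
        # Third operation S130: display the third display object in the second image.
        self.second_display.show(d20)
        # Fourth operation S140: display a fourth display object in the first image
        # at a position overlaying D20 (or on an extension of its movement).
        d25 = DisplayObject("D25", d20.x, d20.y)
        self.first_display.show(d25)


unit = DataOutputUnit(Display("first display unit 11"), Display("second display unit 21"))
unit.second_output_operation(DisplayObject("D20", 320.0, 240.0))
```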
First, an example of the second output operation S2 will be described.
These drawings illustrate an example of the second output operation S2. These drawings illustrate an application example in which the human viewer 80 views the circumstances of real space, and a designated display object of an image of a movie, television, etc., jumps out from the virtual space. In this example, an image of a sport is displayed as the second image 21d by the second display unit 21 of the second display device 20. In the second image 21d, the image of a ball used in the sport may be used as the designated display object.
For example, at the first time t11 as illustrated in
This operation is performed by the data output unit 30. In other words, the data output unit 30 outputs the second data 20d to display the third display object D20 in the second image 21d (the third operation S130).
At this time, an image relating to the third display object D20 (the ball) is not displayed by the first display unit 11. As illustrated in
At the second time t12 as illustrated in
As illustrated in
At the third time t13 as illustrated in
This operation is performed by the data output unit 30. In other words, the data output unit 30 outputs the first data 10d to display the display object (the fourth display object D25) in the first image 11d based on the third display object at a position overlaying the third display object D20 as viewed by the human viewer 80 or at a position on an extension of the movement of the third display object D20 as viewed by the human viewer 80 (the fourth operation S140).
Thereby, as illustrated in
At the fourth time t14 as illustrated in
As illustrated in
For example, in the case where the first display device 10 is not used and only the second display device 20 is used as the image display apparatus, the human viewer 80 perceives the display state illustrated in
Conversely, in the image display apparatus 110 according to the embodiment, the first display device 10 is used to display the fourth display object D25 (the ball) based on the third display object D20 in the first image 11d of the first display unit 11. Thereby, as described in regard to
For example, in a powerful scene, an image of a portion (in this example, the ball, i.e., the fourth display object D25) is displayed in the first image 11d to move the image from the second image 21d into the first image 11d. Then, the ball (the fourth display object D25) that jumped out from the second display unit 21 is perceived to be superimposed onto real space. Thereby, the ball is perceived to exist inside real space.
With the image display apparatus 110 according to the embodiment, the virtual object can be moved seamlessly between the virtual space and real space. Thereby, an image display apparatus having a strong sense of presence and better harmony between real space and the virtual object can be provided.
In the fourth operation S140, the display object (the fourth display object D25) based on the third display object D20 is displayed in the first image 11d. In the fourth operation S140, the third display object D20 may be erased from the second image 21d. Or, in the fourth operation S140, the contrast of the third display object D20 may be made lower than that in the previous state (e.g., the state of the third operation S130). Thereby, the third display object D20 becomes difficult to view in the second image 21d.
For example, in the fourth operation S140, the third display object D20 is substantially erased from the second image 21d; and the fourth display object D25 based on the third display object D20 is displayed in the first image 11d. Thereby, the third display object D20 is perceived to move more naturally from the second image 21d into the first image 11d. Thereby, the harmony between real space and the virtual object can be even better; and the sense of presence can be even stronger.
Thus, in the fourth operation S140, the data output unit 30 may further implement outputting the second data 20d to erase the third display object D20 from the second image 21d. In other words, the data output unit 30 may further implement outputting the second data 20d in the fourth operation S140 to include the information of the second image 21d not including the third display object D20. The data output unit 30 may further implement outputting the second data 20d including the information of the second image 21d such that the ratio of the luminance of the third display object D20 to the luminance around the third display object D20 in the second image 21d of the fourth operation S140 is lower than the ratio of the luminance of the third display object D20 to the luminance around the third display object D20 of the third operation S130.
In this example, the fourth operation S140 is implemented according to the movement of the third display object D20 in the second image 21d in the third operation S130. In other words, the fourth operation S140 is started using the movement of the third display object D20 as a trigger. For example, the fourth operation S140 is implemented when the third display object D20 is displayed to move from the depthward portion of the second image 21d toward the front as viewed by the human viewer 80 in the third operation S130. For example, the fourth operation S140 is not implemented when the third display object D20 is displayed to move from the front toward the depthward portion of the second image 21d as viewed by the human viewer 80 in the third operation S130. Thus, when the third display object D20 is displayed in a designated state, the fourth operation S140 is implemented; and the fourth display object D25 is displayed such that the third display object D20 appears to move from the second image 21d into the first image 11d. Thereby, an even stronger sense of presence can be provided to the human viewer 80.
For example, the data output unit 30 implements the fourth operation S140 when the movement of the third display object D20 in the second image 21d in the third operation S130 meets a predetermined condition. For example, the predetermined condition may include the state in which the size of the third display object D20 increases over time. The predetermined condition may include the state in which the size of the third display object D20 changes continuously over time. Thereby, for example, as viewed by the human viewer 80, the third display object D20 appears to have moved from the second image 21d into the first image 11d in the case of the state in which the third display object D20 is perceived to move toward the human viewer 80.
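For instance, such a condition could be evaluated from the recent history of the apparent size of the third display object. The following is a rough sketch under the assumption that the size is sampled once per frame; the sample count and growth factor are placeholder values.

```python
def meets_movement_condition(size_history, samples=5, min_growth=1.2):
    """Return True if the third display object appears to approach the viewer:
    its apparent size has grown continuously over the last `samples` frames
    and by at least the factor `min_growth` overall."""
    if len(size_history) < samples:
        return False
    recent = size_history[-samples:]
    growing = all(b > a for a, b in zip(recent, recent[1:]))  # continuous increase
    return growing and recent[-1] >= min_growth * recent[0]   # sufficient overall growth


# Example: the ball's on-screen size over recent frames.
print(meets_movement_condition([10, 12, 15, 19, 25]))  # True  -> implement S140
print(meets_movement_condition([25, 19, 15, 12, 10]))  # False -> stay in S130
```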
These drawings illustrate another example of the second output operation S2. These drawings illustrate an example in which the second display device 20 is used as digital signage. For example, the second display device 20 is mounted in a public location. In this example, an image including several pieces of merchandise is displayed as the second image 21d by the second display unit 21 of the second display device 20. The images of the merchandise displayed in the second image 21d may be used as the third display object D20.
For example, at the first time t21 as illustrated in
At this time, as illustrated in
In this state, the human viewer 80 causes a hand (the body 82) of the human viewer 80 to approach the second display unit 21. Thereby, the data output unit 30 implements the fourth operation S140 recited below.
At the second time t22 as illustrated in
In this example, a state is formed in which the user (the human viewer 80) grasps and views the virtual object (the merchandise) displayed by the second display unit 21 that displays the digital signage. According to the embodiment, an image display apparatus having a strong sense of presence and better harmony between real space and the virtual object can be provided.
In such a case as well, in the fourth operation S140, the third display object D20 is erased from the second image 21d, or the contrast of the third display object D20 is caused to be lower than that in the state of the third operation S130. Thereby, the sense of presence is even stronger.
In this example, the fourth operation S140 is implemented when the human viewer 80 moves the hand (the body 82). Thus, the image display apparatus 110 can be operated by the human viewer 80 moving the body 82 (any portion such as a hand, a leg, the torso, the head, etc.) of the human viewer 80. Thus, the operation by the human viewer 80 includes moving the body 82 of the human viewer 80. In this example, the data output unit 30 implements the fourth operation S140 based on the operation by the human viewer 80. Thereby, the human viewer 80 can bring the third display object D20 (e.g., the image of the merchandise) corresponding to the intention of the human viewer 80 into the first image 11d from the second image 21d. For example, the data of the image may be stored in any memory portion as the data of the fourth display object D25. The data that is stored may be extracted and displayed by any display device at any time.
An example of the first output operation S1 will now be described.
These drawings illustrate an example of the first output operation S1.
For example, at the first time t31 as illustrated in
For example, at the second time t32 after the first operation S110 as illustrated in
For example, the second display object D15 based on the first display object D10, which is the image of the work of art, is displayed in the second image 21d that shows the image of the home of the human viewer 80. Thereby, the human viewer 80 can perceive the image of the work of art as if it were placed in the home, which serves as the virtual space.
For example, the configuration and the color of the second display object D15 are set to be substantially the same as the configuration and the color of the first display object D10 of the first image 11d of the first display unit 11 as viewed by the human viewer 80.
By viewing the second display object D15 displayed in the second image 21d, the human viewer 80 can perceive that the first display object D10 of the first image 11d has been moved from the first image 11d into the second image 21d.
In such a case, the data output unit 30 may further implement outputting the first data 10d to erase the first display object D10 from the first image 11d in the second operation S120. In other words, the data output unit 30 may further implement outputting the first data 10d including the information of the first image 11d not including the first display object D10 in the second operation S120. Also, the data output unit 30 may further implement outputting the first data 10d including the information of the first image 11d such that the ratio of the luminance of the first display object D10 to the luminance around the first display object D10 in the first image 11d of the second operation S120 is lower than the ratio of the luminance of the first display object D10 to the luminance around the first display object D10 of the first operation S110. In other words, the contrast of the first display object D10 in the second operation S120 is caused to be lower than that in the previous state (e.g., the state of the first operation S110). Thereby, the human viewer 80 perceives the first display object D10 as being substantially erased in the second operation S120. Thereby, the harmony between real space and the virtual object can be even better; and the sense of presence can be even stronger.
In this example, the second operation S120 is started by, for example, the human viewer 80 moving the hand (the body 82) toward the second display unit 21. In other words, the data output unit 30 implements the second operation S120 based on the operation by the human viewer 80.
The data output unit 30 may implement the second operation S120 when the movement of the first display object D10 in the first image 11d of the first operation S110 meets a predetermined condition. The predetermined condition includes, for example, a state in which a change of the first display object D10 exceeds a predetermined threshold value. For example, in the case where an image is displayed in which the form of an animal that is growing is drawn as the first display object D10, the image of the animal may be shown to jump into the second image 21d when the animal grows to a certain state. For example, the pupa of a butterfly may be displayed in the first image 11d; and when the adult butterfly emerges from the pupa, the butterfly may be moved into the second image 21d.
In the first operation S110 described in regard to
In the first operation S110, the position of the object image D70 when the human viewer 80 views the object image D70 (the hand, etc.) via the first display unit 11 is determined by, for example, the second sensor described below, etc.
The first output operation S1 and the second output operation S2 recited above may be performed simultaneously. The second output operation S2 may be implemented after the first output operation S1. The first output operation S1 may be implemented after the second output operation S2. The first output operation S1 may be implemented repeatedly. The second output operation S2 may be implemented repeatedly.
As illustrated in
In the fourth display object display operation S140a, the fourth display object D25 is displayed in the first image 11d. At this time, as viewed by the human viewer 80, the position of the fourth display object D25 in the first image 11d is a position overlaying the third display object D20 or a position on an extension of the movement of the third display object D20.
In the placement operation S140b, the fourth display object D25 is disposed at a prescribed position in the first image 11d. For example, the placement operation S140b is implemented after the fourth display object display operation S140a. Or, as described below, for example, the placement operation S140b is implemented simultaneously with the fourth display object display operation S140a. In the placement operation S140b, the data output unit 30 outputs the first data 10d including the information of the first image 11d to display the fourth display object D25 in the first image 11d using a reference, where the reference is the position of the object image D70 when the human viewer 80 views the object image D70 via the first display unit 11.
An example of the placement operation S140b will now be described. In the following example, the placement operation S140b is implemented after the fourth display object D25 is displayed once (after the fourth display object display operation S140a).
For example, at the first time t21 as illustrated in
As illustrated in
At this time, for example, the digital signage may record ID data additionally provided to the image that is moved, the time at which the image is moved, ID data additionally provided to the first display device 10 to which the image is moved, etc. By recording such data, data relating to customers (the human viewers 80) having a high likelihood of purchasing the merchandise can be acquired.
For example, data relating to the fourth display object D25 displayed by the first display unit 11 may be stored in a memory portion 10mp provided in the first display device 10. Or, this data may be transferred to and stored in any memory device connected to the first display device 10.
As illustrated in
For example, in the case where the object 70 is a desk as illustrated in
If the human viewer 80 likes the merchandise (the object corresponding to the image of the fourth display object D25), purchasing procedures are performed. Then, the merchandise is delivered to the home.
In this example, it is possible for the user to take the virtual object (the fourth display object D25) out of the second image 21d of the digital signage by extending the user's hand and to move the virtual object to another position. For example, the user may virtually take the merchandise home from a street-corner display and virtually view the merchandise superimposed onto furniture, etc., of the home. If the user likes the merchandise, it is possible to purchase the merchandise and subsequently receive it.
Thus, with the image display apparatus 110 according to the embodiment, an image display apparatus having a strong sense of presence and better harmony between real space and the virtual object can be provided. According to the embodiment, for example, a display is possible in which the user can seamlessly take a virtual object from three-dimensional virtual space into real space. Then, the virtual object can be moved with the movement of the user. Also, a display is provided in which the user can seamlessly bring a virtual object that is superimposed onto real space into the virtual space.
An example of augmented reality presentation technology includes technology that superimposes a virtual object onto real space. Also, there is technology that acquires real space using a camera, uses the image that is acquired by the camera as the virtual space, and superimposes the desired virtual object onto the virtual space. In these technologies, the virtual object is not perceived to move between real space and the virtual space.
On the other hand, in a DFD (Depth-fused 3D Display) that displays two-dimensional images on multiple light-transmitting two-dimensional displays and overlays the displays, a stereoscopic image is localized in the space between the displays. In this technology as well, the virtual object is not perceived to move between real space and the virtual space.
Also, for example, there is technology that acquires a real object using a camera and links the real object to other information in a virtual space projected by a fixed projector. Also, there is an application in which the user moves a virtual object to another display terminal, etc., by hand. However, the movement of the virtual object is a movement from the image space of a projector to a display terminal; and the virtual object is not superimposed onto an object in real space.
Further, for example, there is a multiwindow function of a computer that is used as an interface to move a display object between multiple displays. In the multiwindow function, the virtual object moves from one screen to another screen. In the multiwindow function, the movement of the virtual object in the virtual space is performed inside the screens. Therefore, the virtual object is not perceived to move between the virtual space and real space.
Conversely, in the image display apparatus 110 according to the embodiment, a display device that can move with the user and can superimpose images onto both real space and the virtual space is used as the first display device 10. For example, a transmission-type HMD is used as the first display device 10. Such a first display device 10 is used in combination with the second display device 20; and the second display unit 21 of the second display device 20 is viewed via the first display unit 11 of the first display device 10. Thereby, an image effect in which the virtual object jumps out from the virtual space into real space can be provided. Further, in the embodiment, an effect can be provided in which the virtual object displayed by the first display device 10 moves from real space into the virtual space (into the second image 21d).
Thus, in the image display apparatus 110 according to the embodiment, the virtual object can be perceived to move seamlessly between real space and the virtual space.
To display a virtual stationary display, a configuration may be considered in which an immersive display such as a super-wide-view video see-through HMD, a Cave Automatic Virtual Environment (CAVE), etc., is combined with an imaging device that obtains images of real space. In such a case, practical use is difficult because a massive device and a large mounting space are necessary.
In the embodiment, for example, the second display object D15 of the second operation S120 and the fourth display object D25 of the fourth operation S140 are generated based on the relative position and the relative angle between the first display device 10 and the second display device 20 that are detected by the first sensor 50. For example, the second display object D15 is generated such that the second display object D15 is perceived to be continuous with the first display object D10 of the first image 11d. For example, the fourth display object D25 is generated such that the fourth display object D25 is perceived to be continuous with the third display object D20 of the second image 21d. Thereby, the virtual object can be perceived to move between the first image 11d and the second image 21d.
At least one selected from the second operation S120 and the fourth operation S140 is implemented based on at least one selected from the movement of the virtual object and the prescribed operation by the user. Thereby, there can be less incongruity of the movement of the virtual object between the first image 11d and the second image 21d.
In the data output unit 30 as illustrated in
In the case where there is a movable image, it is determined whether or not the first display device 10 is connected to the data output unit 30 (step S330). At this time, it may be determined further whether or not the first sensor 50 is connected to the data output unit 30. For example, the peripheral devices connected to the data output unit 30 are automatically recognized by an operating system applied to the data output unit 30. In the case where peripheral devices are not connected, the image is displayed as-is by the second display device 20 (step S321).
In the case where a peripheral device is connected, a recognition flag is set that the display object is movable between the first display device 10 and the second display device 20.
Then, the first sensor 50 detects the relative position and the relative angle between the first display device 10 and the second display device 20; and the data output unit 30 acquires the information relating to the relative position and the relative angle (step S340). For example, the data output unit 30 generates the fourth display object D25 displayed by the first display unit 11 based on this information (step S341). The position and the size of the fourth display object D25 of the first display unit 11 are determined according to the relative position and the relative angle between the first display device 10 and the second display device 20.
Continuing, in the data output unit 30, it is determined whether or not the movement condition of the image of the virtual object between the first display device 10 and the second display device 20 satisfies the predetermined condition based on the information relating to the relative position and the relative angle (step S350).
As illustrated in
As illustrated in
In step S350 illustrated in
In the case where the movement condition of the image of the virtual object satisfies the predetermined condition, the first display unit 11 displays the fourth display object D25 (step S360). Then, for example, the third display object D20 is erased; or the second image 21d in which the contrast of the third display object D20 is reduced is displayed by the second display unit 21.
By such an operation, the third operation S130 and the fourth operation S140 are implemented. The first operation S110 and the second operation S120 can be implemented by interchanging the operation relating to the first display unit 11 and the operation relating to the second display unit 21 in steps S310 to S360.
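The decision flow of steps S310 to S360 might be organized as in the sketch below. The interfaces of the content, display, and sensor objects (has_movable_object, relative_pose, and so on) are hypothetical, and the details of each step depend on the content format.

```python
def run_second_output_flow(content, hmd, display2, sensor):
    """Illustrative flow corresponding to steps S310 to S360 (second output operation)."""
    # Check whether the content contains an image that is movable between the displays.
    if not content.has_movable_object():
        display2.show(content.frame())                 # display as-is
        return
    # Step S330: check whether the first display device (and the first sensor) is connected.
    if hmd is None or sensor is None:
        display2.show(content.frame())                 # step S321: display as-is
        return
    movable = True                                     # recognition flag
    # Step S340: acquire the relative position and angle between the two display devices.
    rel_pos, rel_angle = sensor.relative_pose()
    # Step S341: pre-generate the fourth display object D25 for the HMD viewpoint.
    d25 = content.render_for_viewpoint(rel_pos, rel_angle)
    # Step S350: test whether the movement condition is satisfied.
    if movable and content.movement_condition_met(rel_pos, rel_angle):
        # Step S360: show D25 on the HMD; erase or de-emphasize D20 on the second display.
        hmd.show(d25)
        display2.show(content.frame(erase_moving_object=True))
    else:
        display2.show(content.frame())
```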
The fourth display object D25 (or the second display object D15) of step S341 recited above may be generated by a computer in the cloud that is connected to the data output unit 30. In such a case, the information relating to the relative position and the relative angle between the first display device 10 and the second display device 20 detected by the first sensor 50 is supplied to the computer in the cloud via a network; and the data relating to the fourth display object D25 (or the second display object D15) is generated by the computer in the cloud based on the information. This data is supplied to the data output unit 30.
As illustrated in
The moving object MO (the fourth display object D25) displayed by the first display device 10 is generated according to the position and the angle of the first display device 10. To this end, for example, a sub-track including data relating to multi-view images is provided in the Image data ID supplied to the data output unit 30. For example, the data used in this sub-track is generated by creating an image corresponding to the position of the first display device 10 (corresponding to the viewpoint of the human viewer 80). Or, a data set of images may be provided beforehand to a computer in the cloud; information such as the position, the angle, the angle of view, etc., of the first display device 10 may be supplied from the data output unit 30 to the computer in the cloud; and the data relating to the moving object MO may be generated by the computer in the cloud. This data is acquired by the data output unit 30; and the image is displayed by the first display device 10 based on this data. An encoding scheme such as MVC (Multi View Codec) may be used to transmit the Image data ID. In the embodiment, the method for generating the image (e.g., at least one selected from the fourth display object D25 and the second display object D15) is arbitrary.
In the embodiment, a designated object (the moving object MO) of a designated scene is perceived to move between the second display device 20 and the first display device 10. Thereby, for example, a more powerful dramatic effect can be provided.
These drawings illustrate an example of a method for generating the Image data ID.
As illustrated in
As illustrated in
As illustrated in
The position of the moving object MO when viewed via the first display device 10 (the first display unit 11) is determined from the position of the moving object MO of the imaging space coordinate system 85x, the relationship between the imaging space coordinate system 85x and the second image coordinate system 21x, and the relative position and the relative angle (the orientation) of the first display device 10 expressed using the second image coordinate system 21x recited above. By using the position of the moving object MO that is determined, projective transformation of the moving object MO (e.g., the fourth display object D25) into the first image 11d is performed.
Thus, an image corresponding to the position of the first display device 10 is generated from the images of several viewpoints in the sub-track. At this time, the moving object MO (e.g., the fourth display object D25) displayed by the first display device 10 is synchronous with the second image 21d of the second display device 20. For example, the first image 11d is pre-generated after acquiring the information relating to the relative position and the relative angle of the first display device 10 (e.g., step S341). Then, the images corresponding to the first display unit 11 and the second display unit 21 are displayed respectively by the first display unit 11 and the second display unit 21 (e.g., step S360) when the condition of the movement of the image relating to the moving object MO is satisfied (e.g., step S350).
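One possible realization of this chain of transformations is sketched below: a point of the moving object MO, given in the coordinate system of the second image, is expressed in the frame of the first display device using the detected relative position and orientation, and is then projected onto the first image by a simple pinhole-style projection. The focal length, image center, and the example poses are placeholder values rather than parameters of the embodiment.

```python
import numpy as np


def project_to_first_image(p_world, hmd_position, hmd_rotation,
                           focal_length=800.0, image_center=(640.0, 360.0)):
    """Project a 3D point given in the second-image (reference) coordinate system
    onto the first image 11d of the first display device.

    p_world      : (3,) position of the moving object MO in the reference frame
    hmd_position : (3,) position of the first display device in the same frame
    hmd_rotation : (3, 3) rotation from the reference frame into the HMD frame
    """
    # Express the point in the coordinate frame of the first display device.
    p = hmd_rotation @ (np.asarray(p_world, float) - np.asarray(hmd_position, float))
    if p[2] <= 0:
        return None  # behind the viewer; not visible in the first image
    # Pinhole-style projection onto the first image plane.
    u = image_center[0] + focal_length * p[0] / p[2]
    v = image_center[1] + focal_length * p[1] / p[2]
    return (float(u), float(v))


# Example: a ball 1 m in front of the second display; the HMD is 3 m away,
# facing the display (a 180-degree rotation about the vertical axis).
R = np.diag([-1.0, 1.0, -1.0])
print(project_to_first_image([0.0, 0.0, 1.0], [0.0, 0.0, 3.0], R))  # -> (640.0, 360.0)
```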
For example, the method for transforming the coordinates recited above is applicable to the processing of the second display object D15 being displayed by the second display unit 21 by interchanging the relationship between the first display device 10 and the second display device 20.
For example, the operation recited above is applicable in the case where the image (the content) that is displayed is a two-dimensional video image. However, the embodiment is not limited thereto. An operation similar to that recited above is implemented also in the case where the content that is displayed is a three-dimensional video image. In the case where the content including the three-dimensional information is displayed, the coordinate values corresponding to the imaging space 85 when acquiring the Image data ID are known. Therefore, the projective transformation of the moving object MO can be easily implemented. Therefore, the generation of the fourth display object D25 and the second display object D15 is easy.
In the embodiment, the fourth operation S140 is implemented when the predicted position of the virtual object (the third display object D20) enters the first viewed volume 10v of the first display unit 11. In the fourth operation S140 at this time, the fourth display object D25 is displayed in the first image 11d at a position on an extension of the movement of the third display object D20 as viewed by the human viewer 80.
The second operation S120 is implemented when the predicted position of the virtual object (the first display object D10) enters the second viewed volume 20v of the second display unit 21. In the second operation S120 at this time, the second display object D15 is displayed in the second image 21d at a position on an extension of the movement of the first display object D10 as viewed by the human viewer 80.
For example, in the fourth operation S140, the temporal change of the fourth display object D25 (e.g., the temporal change of the color, the configuration, the size, etc.) is continuous with the temporal change of the third display object D20. For example, in the second operation S120, the temporal change of the second display object D15 (e.g., the temporal change of the color, the configuration, the size, etc.) is continuous with the temporal change of the first display object D10.
In other words, in the second output operation S2, the data output unit 30 outputs the first data 10d of the fourth operation S140 such that the fourth display object D25 of the fourth operation S140 is continuous with the change of the third display object D20 in the second image 21d of the third operation S130. For example, the change of the third display object D20 in the second image 21d of the third operation S130 is the temporal change of the third display object D20.
Similarly, in the first output operation S1, the data output unit 30 outputs the second data 20d of the second operation S120 such that the second display object D15 of the second operation S120 is continuous with the change of the first display object D10 in the first image 11d of the first operation S110. For example, the change of the first display object D10 in the first image 11d of the first operation S110 is the temporal change of the first display object D10.
Thereby, the incongruity of the color, the configuration, the size, the position, and the movement direction of the virtual object when the virtual object moves between the first display device 10 and the second display device 20 can be suppressed.
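The test of whether the predicted position of the virtual object has entered a viewed volume can be sketched as a simple frustum-containment check. In the following sketch the predicted position is obtained by linear extrapolation of the last two samples, and the field of view and depth range of the frustum are placeholder values.

```python
import numpy as np


def predicted_position(positions, lookahead=1.0):
    """Linearly extrapolate the next position from the last two position samples."""
    p_prev = np.asarray(positions[-2], float)
    p_last = np.asarray(positions[-1], float)
    return p_last + lookahead * (p_last - p_prev)


def in_viewed_volume(p, half_fov_deg=20.0, near=0.1, far=5.0):
    """True if point p (in the display unit's frame, +z forward) lies inside a
    symmetric viewing frustum with the given half field of view and depth range."""
    x, y, z = p
    if not (near <= z <= far):
        return False
    limit = z * np.tan(np.radians(half_fov_deg))
    return abs(x) <= limit and abs(y) <= limit


# Example: the third display object moves toward the viewer and its predicted
# position enters the first viewed volume, so the fourth operation is started.
history = [(0.0, 0.0, 3.0), (0.0, 0.0, 2.0)]
print(in_viewed_volume(predicted_position(history)))  # True
```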
These drawings illustrate the fourth display object display operation S140a and the placement operation S140b of the fourth operation S140.
For example, in the state in which the third display object D20 (e.g., an image of merchandise) is displayed in the second image 21d of the second display unit 21 as illustrated in
Subsequently, as illustrated in
In the fourth display object display operation S140a illustrated in
For example, the position of the hand (the body 82) of the user in three-dimensional space is detected by the first sensor 50. Or, a second sensor 60 may be provided separately from the first sensor 50; and the position of the hand (the body 82) of the user in three-dimensional space may be detected by the second sensor 60.
In other words, the image display apparatus 110 may further include the second sensor 60. The second sensor 60 detects the object image D70 that exists in the region around the first display device 10. For example, the object image D70 includes the image of at least a portion of the body 82 of the human viewer 80. For example, the object image D70 includes the image of the object 70 (e.g., a desk, etc.) existing in the region around the first display device 10 (around the human viewer 80).
The data output unit 30 may implement the placement operation S140b in the operation (the fourth operation S140) in which the fourth display object D25 is displayed by the first display unit 11. In the placement operation S140b, the data output unit 30 outputs the first data 10d including the information of the first image 11d to display the fourth display object D25 in the first image 11d using a reference, where the reference is the position of the object image D70 when the human viewer 80 views the object image D70 detected by the second sensor 60 via the first display unit 11.
For example, the second sensor 60 is additionally provided in the first display device 10. For example, the second sensor 60 is mounted on the first display device 10. The second sensor 60 may include, for example, an imaging device. Thereby, the position of the object image D70 (e.g., the hand, etc.) in space is determined; and the movement path of the virtual object is determined. For example, an XYZ coordinate system is established using the second display unit 21 as a reference; and the position of the hand is established using this coordinate system. For example, the fourth display object D25 can be disposed at the desired position (e.g., the position of the hand) using the information of the position of the hand by performing processing similar to the case where the content of the image has three-dimensional information.
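As a sketch of the placement operation, the fourth display object can be re-anchored every frame to the hand position reported by the second sensor, expressed in the coordinate system that uses the second display unit 21 as a reference. The projection function is assumed to be of the kind shown in the earlier sketch; the offset value is a placeholder.

```python
def place_on_hand(d25, hand_position, hmd_pose, project, offset=(0.0, -0.05, 0.0)):
    """Re-anchor the fourth display object D25 to the detected hand position so
    that, as viewed through the first display unit, it appears to rest on the hand.

    d25           : display object with x, y pixel coordinates in the first image
    hand_position : (x, y, z) hand position from the second sensor, in the
                    coordinate system referenced to the second display unit 21
    hmd_pose      : (position, rotation) of the first display device in that system
    project       : function mapping a 3D point and the HMD pose to first-image pixels
    """
    # Shift the anchor slightly so the object sits just above the palm.
    anchor = tuple(h + o for h, o in zip(hand_position, offset))
    pixel = project(anchor, *hmd_pose)
    if pixel is not None:
        d25.x, d25.y = pixel  # draw D25 at the hand position in the first image
    return d25
```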
As illustrated in
For example, when the position of the user can be detected by the first sensor 50 as illustrated in
For example, in the state in which the first display device 10 (the user) cannot be detected by the first sensor 50 as illustrated in
In the image display apparatus 110 according to the embodiment, the movement of the virtual object between the first display unit 11 and the second display unit 21 is performed by the first output operation S1 and the second output operation S2. The operation recited above is implemented such that this movement is perceived naturally. For example, the movement of the virtual object can be perceived as unnatural due to the error and the time delay (the detection error) of the relative position and the relative angle between the first display device 10 and the second display device 20, the difference of the display parameters (the parameter difference) between the first display device 10 and the second display device 20, the difficulty of viewing (the visual noise) due to the background image transmitted through the first display unit 11, etc.
For example, the detection error recited above increases in the case where, for example, the movement of the head of the user (the human viewer 80) is severe. For example, the detection error increases in the case where the change of the relative position and the relative angle between the first display device 10 and the second display device 20 is large.
For example, the data output unit 30 implements at least one selected from the first output operation S1 and the second output operation S2 when the change amount of the relative position detected by the first sensor 50 is not more than a predetermined value and when the change amount of the relative angle detected by the first sensor 50 is not more than a predetermined value. The data output unit 30 does not implement the first output operation S1 and the second output operation S2 in the case where the change amount of the relative position exceeds the predetermined value or in the case where the change amount of the relative angle exceeds the predetermined value. For example, the change amount of the relative position is the change amount of the relative position within a predetermined amount of time. For example, the change amount of the relative angle is the change amount of the relative angle within a predetermined amount of time.
For example, the movement of the virtual object is implemented when the change amount of the relative position and orientation between the first display device 10 and the second display device 20 determined from the detection result of the first sensor 50 is not more than the prescribed value and is not implemented in the case where the prescribed value is exceeded.
By such an operation, the unnaturalness caused by the detection error can be suppressed.
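One way to implement this gating is to keep a short history of the detected relative pose and allow the image movement only while the change within that window stays below both thresholds. The window length and threshold values below are placeholders.

```python
from collections import deque

import numpy as np


class MovementGate:
    """Allows the movement of the virtual object between the display devices only
    while the relative pose detected by the first sensor changes slowly enough."""

    def __init__(self, max_pos_change=0.05, max_angle_change=5.0, window=10):
        self.max_pos_change = max_pos_change      # meters allowed within the window
        self.max_angle_change = max_angle_change  # degrees allowed within the window
        self.poses = deque(maxlen=window)

    def update(self, rel_position, rel_angle_deg):
        """Record the latest relative position (3-vector) and relative angle (degrees)."""
        self.poses.append((np.asarray(rel_position, float), float(rel_angle_deg)))

    def movement_allowed(self):
        """True if the pose change over the window is below both thresholds."""
        if len(self.poses) < self.poses.maxlen:
            return False
        (p0, a0), (p1, a1) = self.poses[0], self.poses[-1]
        return (np.linalg.norm(p1 - p0) <= self.max_pos_change
                and abs(a1 - a0) <= self.max_angle_change)
```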
Also, by providing a sensation of movement when moving the virtual object, the incongruity during the movement is suppressed; and the unnaturalness also can be suppressed. For example, the sound producing unit 40 is provided in the image display apparatus 110. A sound is produced by the sound producing unit 40 in at least one operation selected from the first output operation S1 and the second output operation S2 by the data output unit 30. A sound effect is produced by the sound producing unit 40 when moving the virtual object. Thereby, for example, the unnaturalness caused by the detection error can be suppressed.
In the fourth operation S140, the spatial frequency of the image in the second image 21d may be reduced. Thereby, for example, a smoke-like image effect is provided by the second display unit 21.
These drawings illustrate an example of the fourth operation S140.
As illustrated in
Such a low spatial frequency noise image D22 may be provided in the second operation S120. Thereby, for example, the unnaturalness caused by the detection error can be suppressed.
There are cases where the difference (the parameter difference) between the display parameters of the first display device 10 and the second display device 20 is large. For example, the display parameters include the color, the luminance, the resolution, the vertical:horizontal ratio of the screen, the viewing distance, etc. For example, there are cases where the color, the luminance, the resolution, the vertical:horizontal ratio of the screen, and the viewing distance are greatly different between the first display device 10 and the second display device 20.
For example, the transformation formula for the color, the luminance, the dimensions, etc., is pre-made; and the transformation formula is used when moving the virtual object. Thereby, the incongruity can be suppressed.
For example, the RGB values of the color of the second display device 20 are taken as (r0, g0, b0); and the RGB values of the color of the first display device 10 are taken as (r1, g1, b1). In the case where the characteristics relating to the color are linear, (r0, g0, b0) can be expressed using the transformation of Formula 1.
By using this transformation formula, the color per pixel is determined.
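Formula 1 itself is not reproduced here. For a linear color characteristic, such a transformation is typically a 3x3 matrix mapping the RGB values of one display device to those of the other; the matrix below is a placeholder rather than measured device data.

```python
import numpy as np

# Placeholder matrix relating the first display's (r1, g1, b1) to the second
# display's (r0, g0, b0); in practice it is derived from colorimetric measurement.
M = np.array([[0.95, 0.03, 0.02],
              [0.02, 0.96, 0.02],
              [0.01, 0.02, 0.97]])


def to_second_display_color(rgb1):
    """Apply the linear transform (r0, g0, b0)^T = M (r1, g1, b1)^T per pixel."""
    return np.clip(M @ np.asarray(rgb1, float), 0.0, 1.0)


print(to_second_display_color([1.0, 0.5, 0.0]))  # transformed color for one pixel
```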
For example, in the first output operation S1, the data output unit 30 can establish the color and the luminance of the second display object D15 of the output of the second data 20d of the second operation S120 by transforming the color and the luminance of the first display object D10 of the first operation S110 by a predetermined method.
Also, in the second output operation S2, the data output unit 30 can establish the color and the luminance of the fourth display object D25 of the output of the first data 10d of the fourth operation S140 by transforming the color and the luminance of the third display object D20 of the third operation S130 by a predetermined method.
Thereby, the unnaturalness caused by the parameter difference can be suppressed.
For example, when the image of the virtual object is displayed by the first display unit 11, the image of the virtual object is perceived as being transparent, with the background showing through it, when an image having a high luminance is displayed in the second image 21d of the second display unit 21 behind the first display unit 11. Thereby, visual noise occurs; and there are cases where the sense of reality decreases even when the image of the virtual object jumps out.
As illustrated in
It can be seen from these figures that the visibility of the image and the background of the light-transmitting display depends on the difference between the luminance of the image and the luminance of the background. When the brightness of the background is higher than the brightness of the image, the image is transparent, and the background is visually confirmed. When the brightness of the background is not more than the brightness of the image, the background is not easily visually confirmed.
The occurrence of the visual noise can be suppressed by reducing the luminance of the first image 11d in the second operation S120 and by reducing the luminance of the second image 21d in the fourth operation S140.
Whether or not the background image is highly noticeable has a relationship with the spatial frequency of the image of the virtual object and the spatial frequency of the background in the case where the background is considered to be an image.
For example, in the fourth operation S140 of the second output operation S2, the luminance difference is increased by implementing at least one selected from increasing the luminance of the first image 11d and decreasing the luminance of the second image 21d. The image is caused to be in an indistinct state by reducing the display spatial frequency of the second image 21d. Thereby, the display of the second image 21d in the fourth operation S140 can be made less noticeable.
Further, the occurrence of the visual noise can be suppressed by modifying the color of the first image 11d based on the transmittance of the background color of the second image 21d.
For example, in the fourth operation S140, the data output unit 30 can output the second data 20d including the information of the second image 21d for which at least one selected from decreasing the luminance of the second image 21d and decreasing the spatial frequency of at least a portion of the second image 21d with respect to the second image 21d of the third operation S130 is implemented. Thereby, in the fourth operation S140, the occurrence of the visual noise caused by the second image 21d being perceived via the first display unit 11 can be suppressed.
Also, in the second operation S120, the data output unit 30 can output the first data 10d including the information of the first image 11d for which at least one selected from decreasing the luminance of the first image 11d and decreasing the spatial frequency of at least a portion of the first image 11d with respect to the first image 11d of the first operation S110 is implemented. Thereby, in the second operation S120, the occurrence of the visual noise caused by the first image 11d being perceived can be suppressed.
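Reducing the luminance and the spatial frequency of the image behind the virtual object can be approximated, for example, by darkening the image and applying a separable box blur. The dimming factor and blur radius below are placeholders.

```python
import numpy as np


def suppress_visual_noise(image, dim=0.4, blur_radius=5):
    """Darken an image and reduce its spatial frequency with a separable box blur
    so that it is less noticeable behind the transmissive first display unit.

    image : float array of shape (H, W) or (H, W, 3) with values in [0, 1]
    """
    img = np.asarray(image, float) * dim  # decrease the luminance
    k = 2 * blur_radius + 1
    kernel = np.ones(k) / k
    # Separable box blur: filter along rows, then along columns.
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, img)
    return np.clip(img, 0.0, 1.0)
```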
As illustrated in
As illustrated in
In other words, the luminance of the location (the region RL) in real space corresponding to the image of the virtual object is reduced by the projector 42 to be lower than that of the other locations (the region RH). Thereby, the occurrence of the visual noise can be suppressed.
In the image display apparatus 130 according to the embodiment as illustrated in
A semi-transmissive reflection plate (a semi-transmissive reflective layer), i.e., a half mirror, is used as the first display unit 11. The light emitted from the image generation unit 12 is reflected by the concave mirror 14a; and the image is enlarged. The light reflected by the concave mirror 14a is reflected by the first display unit 11 to be incident on the eye 81 of the human viewer 80. The human viewer 80 views the image (the first image 11d) based on the light reflected by the first display unit 11. As viewed by the human viewer 80, a portion of the first image 11d overlays the shielding unit 14b. For example, the shielding unit 14b shields the light from the outside that would be incident on the eye 81 of the human viewer 80 via the first display unit 11. For example, at least a portion of the second image 21d of the second display unit 21 is shielded by the shielding unit 14b. For example, the region that shields includes a region overlaying the fourth display object D25 as viewed by the human viewer 80. Thereby, the occurrence of the visual noise can be suppressed. The optical transmittance of the shielding unit 14b may be changed.
For example, although a light-transmitting HMD may be used as the first display device 10, the embodiment is not limited thereto. For example, a light-transmitting tablet display or a video see-through tablet display may be used as the first display device 10. In such a case as well, a display that moves the virtual object between real space and the virtual space can be provided.
The second embodiment relates to a display method. For example, the display method according to the embodiment includes the processing described in regard to
In this display method, the first data 10d of the first image 11d is supplied to the first display device 10 including the optically-transmissive first display unit 11 that displays the first image 11d; and the second data 20d of the second image 21d is supplied to the second display device 20 including the second display unit 21 that displays the second image 21d. The second image 21d displayed by the second display unit 21 is viewable by the human viewer via the first display unit 11.
In this display method, at least one selected from the first output operation S1 and the second output operation S2 is implemented.
The first output operation S1 includes the first operation S110 and the second operation S120. In the first operation S110, the first data 10d is supplied to display the first display object D10 in the first image 11d. In the second operation S120 after the first operation S110, the second data 20d is supplied to display the display object (the second display object D15) based on the first display object D10 in the second image 21d at a position overlaying the first display object D10 as viewed by the human viewer 80 or at a position on an extension of the movement of the first display object D10 as viewed by the human viewer 80.
The second output operation S2 includes the third operation S130 and the fourth operation S140. In the third operation S130, the second data 20d is supplied to display the third display object D20 in the second image 21d. In the fourth operation S140 after the third operation S130, the first data 10d is supplied to display the display object (the fourth display object D25) in the first image 11d based on the third display object D20 at a position overlaying the third display object D20 as viewed by the human viewer 80 or at a position on an extension of the movement of the third display object D20 as viewed by the human viewer 80.
According to the embodiment, a display method that provides an image having a strong sense of presence and better harmony between real space and the virtual object can be provided.
In the embodiment, the first display device 10 including the light-transmitting first display unit 11, the second display device 20 including the second display unit 21, the data output unit 30, and a position and orientation sensor (the first sensor 50) are provided. The first sensor 50 detects the relative position and the relative angle between the first display device 10 and the second display device 20. The data output unit 30 supplies the image data. The images of the second display unit 21 and the first display unit 11 are generated such that the image of the virtual object displayed by one selected from the second display unit 21 and the first display unit 11 is moved into the image of the other display unit based on the movement of the virtual object or the operation by the user.
In the embodiment, a real space sensor (the second sensor 60) may be further provided to ascertain the position/orientation of the hand or leg of the user and the position/orientation of a real object in the region around the user. The display of the first display device 10 is controlled based on the operation by the user or the circumstances of the virtual object such that the virtual object displayed by the first display device 10 exists at a prescribed position in real space.
The image movement is implemented in the case where the change amounts of the relative position and the relative angle (the orientation) between the first display device 10 and the second display device 20 that are determined by the position and orientation sensor are not more than a prescribed value.
The method for transforming the color information is pre-specified such that the display color and the luminance are substantially the same for the eye that views the virtual object that moves. The image switching control is performed such that there are no inconsistencies in the size, the position in real space, and the movement direction for the virtual object that moves between the first display device 10 and the second display device 20.
According to the embodiment, an image display apparatus having a strong sense of presence and better harmony between real space and the virtual object is provided.
Hereinabove, exemplary embodiments of the invention are described with reference to specific examples. However, the embodiments of the invention are not limited to these specific examples. For example, one skilled in the art may similarly practice the invention by appropriately selecting specific configurations of components included in image display apparatuses such as first display devices, first display units, second display devices, second display units, data output units, first sensors, second sensors, etc., from known art; and such practice is included in the scope of the invention to the extent that similar effects are obtained.
Further, any two or more components of the specific examples may be combined within the extent of technical feasibility and are included in the scope of the invention to the extent that the purport of the invention is included.
Moreover, all image display apparatus practicable by an appropriate design modification by one skilled in the art based on the image display apparatuses described above as embodiments of the invention also are within the scope of the invention to the extent that the spirit of the invention is included.
Various other variations and modifications can be conceived by those skilled in the art within the spirit of the invention, and it is understood that such variations and modifications are also encompassed within the scope of the invention.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the invention.