Embodiments described herein relate generally to an image processing device, a method, a computer program product and a stereoscopic image display device.
A stereoscopic image display device enables a viewer to view stereoscopic images with the unaided eye without having to use special glasses. In such a stereoscopic image display device, a plurality of images having different viewpoints is displayed, and the light beams coming out from those images are separated using a light beam control element such as a parallax barrier or a lenticular lens. Then, the separated light beams are guided to both eyes of the viewer. If the viewing position of the viewer is appropriate, it becomes possible for the viewer to recognize a stereoscopic image. The area of viewing positions within which a stereoscopic image can be recognized by the viewer is called a visible area.
However, such a visible area is only a limited area. That is, for example, there exists a reverse visible area that includes viewing positions at which the viewpoints of images perceived by the left eye are on the right-hand side relative to the viewpoints of images perceived by the right eye, thereby leading to a condition in which stereoscopic images cannot be recognized in a correct manner. For that reason, in a glasses-free stereoscopic image display device, it is difficult for the viewer to view satisfactory stereoscopic images.
A general architecture that implements the various features of the invention will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the invention and not to limit the scope of the invention.
In general, according to one embodiment, an image processing device comprises an observing unit and a generating unit. The observing unit is configured to obtain an observation image by observing a viewer who views a display unit. The display unit is capable of displaying a stereoscopic image. The generating unit is configured to generate a presentation image in which the visible area is superimposed on the observation image by using visible area information indicating the visible area. The visible area is an area within which the viewer is able to view the stereoscopic image. A display form of the visible area changes based on a position of the viewer in a direction perpendicular to the display unit.
An image processing device 100 according to a first embodiment can be suitably implemented in a TV or a PC that enables a viewer to view stereoscopic images with the unaided eye. Herein, a stereoscopic image refers to an image that contains a plurality of parallax images having parallax with respect to each other.
The image processing device 100 generates a presentation image in which a real-space area, within which viewers can stereoscopically view stereoscopic images (i.e., a visible area), is superimposed on an image for observing one or more viewers (i.e., an observation image), and presents the presentation image to the viewers. With that, it becomes possible for the viewers to easily recognize the visible area. Meanwhile, in the embodiments, an image can either be a still image or a moving image.
The observing unit 110 observes the viewers and generates an observation image that indicates the positions of the viewers within the viewing area. Herein, the viewing area refers to the area from which the display surface of the display unit 130 is viewable. The position of a viewer within the viewing area refers to, for example, the position of that viewer with respect to the display unit 130.
In the first embodiment, the observing unit 110 can be a visible-light camera, an infrared camera, a radar, or a sensor. However, in the case of using a sensor as the observing unit 110, it is not possible to directly obtain an observation image. Hence, it is desirable to generate an observation image using CG (Computer Graphics) or animation.
The presentation image generating unit 120 generates a presentation image by superimposing visible area information on the observation image. Herein, the visible area information indicates the distribution of visible areas in the real space. In the first embodiment, the visible area information is stored in advance in a storage medium such as a memory (not illustrated) in the image processing device 100.
More particularly, based on a person position, which is position information indicating the positions of viewers, and based on the visible area information, the presentation image generating unit 120 generates a presentation image in which the relative positional relationship between each viewer and the visible area is superimposed on an observation image. Herein, the relative positional relationship between a viewer and the visible area indicates whether that viewer, who is captured in the observation image, is present within the visible area or outside the visible area. In the first embodiment, the person position is stored in advance in a storage medium such as a memory (not illustrated) in the image processing device 100.
Moreover, in the first embodiment, the top-left corner of an observation image is considered as the origin, the horizontal direction is set as the x-axis, and the vertical direction is set as the y-axis. However, the method of coordinate setting is not limited to this method.
In the real space, the center of the display surface of the display unit 130 is considered as the origin, the horizontal direction is set as the X-axis, the vertical direction is set as the Y-axis, and the normal direction of the display surface of the display unit 130 is set as the Z-axis. However, the method of coordinate setting in the real space is not limited to this method. Under these assumptions, the position of the i-th viewer is represented as Pi(Xi, Yi, Zi).
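The two coordinate conventions above can be sketched in code. The following is a minimal illustration, assuming a pinhole-style camera centered on the display whose horizontal angle of view spans a known real-space width at each distance Z; the class, function, and parameter names are hypothetical and not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class ViewerPosition:
    """Position Pi = (Xi, Yi, Zi) of the i-th viewer in real space,
    with the origin at the center of the display surface."""
    X: float  # horizontal offset from display center
    Y: float  # vertical offset from display center
    Z: float  # distance along the display normal

def to_image_coords(p, image_width, image_height, fov_width_at_z):
    """Project a real-space viewer position onto observation-image pixels
    (origin at the top-left corner), assuming the camera's horizontal angle
    of view spans fov_width_at_z real-space units at distance p.Z."""
    scale = image_width / fov_width_at_z
    x = image_width / 2 + p.X * scale
    y = image_height / 2 - p.Y * scale  # image y-axis points downward
    return x, y

# A viewer standing on the display axis maps to the image center.
print(to_image_coords(ViewerPosition(0.0, 0.0, 200.0), 640, 480, 320.0))  # (320.0, 240.0)
```

This makes explicit how a person position Pi in display-centered coordinates corresponds to a location in the observation image used for superimposition.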
Explained below are the details regarding the visible area information.
In the example illustrated in
The presentation image generating unit 120 generates a presentation image by merging, that is, superimposing the visible area information illustrated in
In the visible area information illustrated in
Meanwhile, in the example illustrated in
Based on the visible area information and the range of the observation image, the presentation image generating unit 120 generates a presentation image at the distance Z1 in the following manner. In the example of the visible area information illustrated in
Alternatively, the presentation image generating unit 120 can generate a presentation image by mirror-reversing an image in which the visible area is superimposed on the observation image. That is, the presentation image generating unit 120 can convert the presentation image into a mirror image (i.e., an image that looks as if the viewer were reflected in a mirror). With that, the viewer becomes able to see his or her mirror image containing the visible area information. Hence, the viewer can intuitively know whether he or she is present within the visible area.
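As a toy sketch of this superimposition and mirror reversal, the following marks visible-area pixels in an observation image (represented here as a small grid of characters, an assumption made purely for illustration) and then flips each row horizontally:

```python
def make_presentation_image(observation, visible_mask, mirror=True):
    """Mark pixels inside the visible area with 'V', then optionally
    mirror-reverse each row so the viewer sees a mirror image."""
    rows = []
    for obs_row, mask_row in zip(observation, visible_mask):
        row = ["V" if inside else pixel
               for pixel, inside in zip(obs_row, mask_row)]
        if mirror:
            row = row[::-1]  # horizontal flip produces the mirror image
        rows.append(row)
    return rows

observation = [["a", "b"], ["c", "d"]]
visible_mask = [[True, False], [False, False]]
print(make_presentation_image(observation, visible_mask))
```

A real implementation would blend the visible-area overlay into camera pixels rather than replace characters, but the row-reversal step is the same.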
In the example of the presentation image illustrated in
Thus, as long as the display format enables the viewer to distinguish between the inside of the visible area and the outside of the visible area, it is possible to implement any method. That is, a presentation image can be generated in which the area on the inside of the visible area is displayed in the abovementioned display format.
Meanwhile, in the case when a plurality of viewers is present, the presentation image generating unit 120 according to the first embodiment refers to the position information of each of the plurality of viewers and refers to the visible area information; and generates, for each viewer, a presentation image in which the relative position relationship between that viewer and the visible area is superimposed on the observation image. That is, for each viewer, the presentation image generating unit 120 generates a presentation image that indicates whether the viewer captured in the observation image is present within the visible area or is present outside the visible area.
As illustrated in
In an identical manner, as illustrated in
For that reason, when a plurality of viewers is present, the presentation image generating unit 120 according to the first embodiment generates one or more presentation images using the visible area information in the neighborhood of the distance in the Z-axis direction (i.e., Z-coordinate position) of each viewer. As a result, the actual position of a viewer inside or outside the visible area is matched with the position indicated in the presentation images.
More particularly, when a plurality of viewers is present, the presentation image generating unit 120 refers to the Z-coordinate position from the person position of each viewer; obtains the visible area range at each Z-coordinate position from a visible area information map, that is, obtains the visible area position and the visible area width at each Z-coordinate position; and generates, for each viewer, presentation information that indicates the existence position of that viewer inside or outside the visible area.
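The lookup just described can be sketched as follows, assuming the visible area information map is a list of (Z, center X, width) triples; the nearest-Z entry stands in for "the visible area information in the neighborhood of" each viewer's distance. The data layout is an assumption for illustration:

```python
def visible_area_at(visible_area_map, z):
    """Return (center_X, width) of the visible area whose map entry is
    nearest to distance z; entries are (Z, center_X, width) triples."""
    _z0, center, width = min(visible_area_map, key=lambda e: abs(e[0] - z))
    return center, width

def is_inside_visible_area(x, z, visible_area_map):
    """True if a viewer at horizontal position x and distance z falls
    within the visible area width at that distance."""
    center, width = visible_area_at(visible_area_map, z)
    return abs(x - center) <= width / 2

vmap = [(100.0, 0.0, 40.0), (200.0, 5.0, 80.0)]
print(is_inside_visible_area(30.0, 190.0, vmap))  # nearest entry is Z=200 → True
```

The inside/outside result per viewer is exactly the "existence position" that the presentation information conveys.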
Following are some exemplary methods for generating such presentation information. For example, as illustrated in
In this case, it is desirable to configure the presentation image generating unit 120 to give notice about the viewer to whom the presentation image at a particular timing corresponds. For example, a display format can be adopted in which the viewer corresponding to the currently-displayed presentation image is colored with a given color or is marked out; or a display format can be adopted in which the viewers not corresponding to the currently-displayed presentation image are left unmarked or are filled in with black.
Alternatively, as illustrated in
Still alternatively, as illustrated in
Meanwhile, the presentation image generating unit 120 can also be configured to superimpose other visible area information on a presentation image. For example, the presentation image generating unit 120 can be configured to superimpose, on a presentation image, the manner of distribution of parallax images in the real space.
Returning to the explanation with reference to
In the case of configuring the display unit 130 to be capable of displaying presentation images as well as stereoscopic images, a display equipped with a light beam control element such as a lenticular lens can be used as the display unit 130. Moreover, the display unit 130 can be installed in an operating device such as a remote controller, and can display presentation images (described later) independent of stereoscopic images. Alternatively, the display unit 130 can be configured as a display unit of the handheld devices of viewers so that presentation images can be sent to the handheld devices and displayed thereon.
Explained below with reference to a flowchart illustrated in
Firstly, the observing unit 110 observes the viewers and obtains an observation image (Step S11). Then, the presentation image generating unit 120 obtains visible area information and person positions, which indicate the position coordinates of the viewers, from a memory (not illustrated) (Step S12).
Subsequently, the presentation image generating unit 120 performs mapping of the person positions onto the visible area information (Step S13), and gets to know the number of viewers and the position of each viewer in the visible area information.
Then, the presentation image generating unit 120 calculates, from the visible area information, the visible area position and the visible area width at the Z-coordinate position of a person position (i.e., at a distance in the Z-axis direction) (Step S14). Subsequently, the presentation image generating unit 120 sets the size of the angle of view of the camera at the Z-coordinate position of that person position to be the image size of the presentation image (Step S15).
Then, based on the visible area position and the visible area width at the Z-coordinate position of that person position, the presentation image generating unit 120 generates a presentation image by superimposing, on the observation image, information indicating whether the corresponding viewer is inside the visible area or outside the visible area (Step S16). Subsequently, the presentation image generating unit 120 sends the presentation image to the display unit 130, and the display unit 130 displays the presentation image (Step S17). For example, the display unit 130 can display the presentation image in some portion of the display screen. Moreover, the display unit 130 can display the presentation image in response to a signal received from an input device (such as a remote controller) (not illustrated). In this case, the input device can be equipped with a button for issuing an instruction to display a presentation image.
The presentation image generating operation and the display operation from Step S14 to Step S17 are repeatedly performed for a number of times equal to the number of viewers obtained at Step S13. Herein, the generation and display of presentation images of a plurality of viewers is performed according to the display format illustrated in
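The per-viewer repetition of Steps S14 to S17 can be sketched as a loop. The data shapes reuse the assumed (Z, center X, width) map layout and are illustrative only; rendering and display (Steps S15 and S17) are elided:

```python
def presentation_results(person_positions, visible_area_map):
    """For each viewer position (X, Y, Z), look up the visible area at that
    viewer's Z-coordinate (Step S14) and record inside/outside (Step S16);
    a real implementation would also render and display each image."""
    results = []
    for i, (x, _y, z) in enumerate(person_positions):
        _z0, center, width = min(visible_area_map, key=lambda e: abs(e[0] - z))
        results.append({"viewer": i, "inside": abs(x - center) <= width / 2})
    return results

print(presentation_results([(0.0, 0.0, 100.0), (50.0, 0.0, 100.0)],
                           [(100.0, 0.0, 40.0)]))
# → [{'viewer': 0, 'inside': True}, {'viewer': 1, 'inside': False}]
```

The loop runs once per viewer obtained at Step S13, mirroring the flowchart's repetition count.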
In this way, in the first embodiment, a presentation image is generated in which whether a viewer is present inside the visible area or outside the visible area specified in the visible area information is superimposed on a viewer-by-viewer basis on an observation image that is obtained by observing the viewers. Then, the presentation image is displayed to the viewers. Hence, each of a plurality of viewers can get to know whether he or she is present inside the visible area or outside the visible area, and becomes able to view satisfactory stereoscopic images without difficulty.
Meanwhile, in the first embodiment, the explanation is given for a case in which a presentation image is displayed on the display unit 130. However, that is not the only possible case. Alternatively, for example, a presentation image can be displayed on a presentation device (such as a handheld device or a PC) (not illustrated) that is connectible to the image processing device 100 via a wired connection or a wireless connection. In this case, the presentation image generating unit 120 sends a presentation image to the presentation device, and then the presentation device displays that presentation image.
Meanwhile, it is desirable that the observing unit 110 is installed inside the display unit 130 or is attached to the display unit 130. However, alternatively, the observing unit 110 can also be installed independent of the display unit 130 and can be connected to the display unit 130 via a wired connection or a wireless connection.
In a second embodiment, not only is the presentation image explained in the first embodiment displayed, but presentation information, which indicates a recommended destination enabling a viewer to move to a position within the visible area, is also generated and displayed.
The recommended destination calculating unit 1123 obtains, based on the person positions of viewers and the visible area information, recommended destinations that indicate positions from which stereoscopic images can be viewed in a satisfactory manner. More particularly, it is desirable that the recommended destination calculating unit 1123 performs mapping of the person positions of existing viewers onto a map of visible area information (see
As a result, for example, as the recommended destination, the recommended destination calculating unit 1123 can obtain the left-hand direction, the right-hand direction, the upward direction, or the downward direction in which the viewer should move from the current position.
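A minimal sketch of this direction calculation, under the same assumed (Z, center X, width) map layout and considering only horizontal movement:

```python
def recommended_direction(x, z, visible_area_map):
    """Return which way a viewer at horizontal position x and distance z
    should move to reach the visible area nearest that distance."""
    _z0, center, width = min(visible_area_map, key=lambda e: abs(e[0] - z))
    if abs(x - center) <= width / 2:
        return "stay"  # already inside the visible area
    return "left" if x > center else "right"

print(recommended_direction(50.0, 100.0, [(100.0, 0.0, 40.0)]))  # → left
```

Vertical (upward/downward) recommendations would follow the same pattern with a Y extent added to each map entry.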
The presentation information generating unit 1121 generates presentation information that contains the information indicating the recommended destination calculated by the recommended destination calculating unit 1123. Herein, the presentation information generating unit 1121 can generate the presentation information by appending it to, or superimposing it on, the presentation image generated by the presentation image generating unit 120; or can generate the presentation information separately from the presentation image.
In an identical manner to the first embodiment, the presentation information generating unit 1121 sends the presentation information, which is generated in the manner described above, to the display unit 130; and the display unit 130 displays the presentation information to the viewers. In the case when the presentation information is generated separately from the presentation image, the display unit 130 can display the presentation information separately from the presentation image in, for example, some portion of the display. Alternatively, the display unit 130 can be configured to be a dedicated display device for displaying the presentation information.
Regarding the generation of presentation information by the presentation information generating unit 1121 using the recommended destination, the following description can be given.
For example, as illustrated in
As another example, as illustrated in
As still another example, as illustrated in
As still another example, as illustrated in
As still another example, as illustrated in
Meanwhile, in addition to displaying the recommended destination as the presentation information on the display unit 130, the configuration can be such that the viewer is notified about the recommended destination via an audio output.
Explained below with reference to a flowchart illustrated in
Once the presentation image is generated, the recommended destination calculating unit 1123 implements the method described above and calculates the recommended destination by referring to the visible area information and the person positions of the viewers (Step S37). Then, the presentation information generating unit 1121 generates the presentation information that indicates the recommended destination (Step S38). Herein, the presentation information is generated by implementing one of the methods described above with reference to
During the operation for generating and displaying the presentation image and the presentation information, Step S14 to Step S39 are repeatedly performed for a number of times equal to the number of viewers obtained at Step S13.
In this way, in the second embodiment, in addition to displaying a presentation image as described in the first embodiment; presentation information, which indicates a recommended destination that enables viewers to move to positions within the visible area, is generated and displayed. As a result, in addition to the effect achieved in the first embodiment, each of a plurality of viewers can easily understand his or her destination inside the visible area. As a result, it becomes possible to view satisfactory stereoscopic images without difficulty.
In a third embodiment, depending on the visible area information and the person positions of viewers, it is determined whether or not to display the presentation information. Only when it is determined to display the presentation information, then the presentation information is generated and displayed.
The person detecting/position calculating unit 1940 detects, from the observation image generated by the observing unit 110, a viewer present within the viewing area and calculates person position coordinates that represent the position coordinates of that viewer in the real space.
More particularly, when the observing unit 110 is configured with a camera, the person detecting/position calculating unit 1940 performs image analysis of the observation image captured by the observing unit 110, and detects the viewer and calculates the person position. In contrast, when the observing unit 110 is configured with, for example, a radar; the person detecting/position calculating unit 1940 can be configured to perform signal processing of the signals provided by the radar, and to detect the viewer and calculate the person position. As far as the detection of a viewer performed by the person detecting/position calculating unit 1940 is concerned, it is possible to detect an arbitrary detection target such as the face, the head, the entire person, or a marker that enables detection of a person. Moreover, the detection of viewers and the calculation of person positions are performed by implementing known methods.
The visible area determining unit 1950 refers to the person positions of viewers as calculated by the person detecting/position calculating unit 1940 and determines the visible area from the person positions of viewers. Herein, it is desirable that the visible area determining unit 1950 sets the visible area determining method in such a way that as many viewers as possible are included in the visible area. Moreover, the visible area determining unit 1950 can set the visible area in such a way that particular viewers are included in the visible area without fail.
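The selection policy described above (cover as many viewers as possible, optionally guaranteeing that a particular viewer is covered) can be sketched as a search over candidate visible areas. Representing each candidate as a (center, width) pair is an assumption for illustration:

```python
def choose_visible_area(candidates, viewer_xs, must_include=None):
    """Pick the candidate visible area (center, width) that covers the most
    viewer positions; candidates that miss a mandatory viewer are skipped."""
    def covers(candidate, x):
        center, width = candidate
        return abs(x - center) <= width / 2

    best, best_count = None, -1
    for candidate in candidates:
        # Skip candidates that exclude the viewer who must be covered.
        if must_include is not None and not covers(candidate, must_include):
            continue
        count = sum(covers(candidate, x) for x in viewer_xs)
        if count > best_count:
            best, best_count = candidate, count
    return best

print(choose_visible_area([(0.0, 40.0), (30.0, 40.0)], [-10.0, 25.0, 35.0]))
# → (30.0, 40.0): it covers two of the three viewers
```

In practice the candidate set would be the visible area configurations reachable by the display controls described next.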
The display image generating unit 1960 generates a display image according to the visible area determined by the visible area determining unit 1950.
Given below is the explanation regarding controlling of the visible area.
b) illustrates a condition in which the clearance gap between the pixels of a display image and an aperture such as a lenticular lens is reduced so as to shift the visible area forward. In contrast, if the clearance gap between the pixels of a display image and an aperture such as a lenticular lens is increased, the visible area shifts backward.
c) illustrates a condition in which a display image is shifted to the right-hand side so that the visible area shifts to the left-hand side. In contrast, if a display image is shifted to the left-hand side, the visible area shifts to the right-hand side. With such simple operations, it becomes possible to control the visible area.
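These control relationships can be summarized in a small dispatch sketch. The sign conventions (negative for leftward/forward) and the action strings are assumptions made for illustration:

```python
def control_action(horizontal_shift, depth_shift):
    """Map a desired visible-area shift to the display-side adjustments
    described above: shifting the display image right moves the visible
    area left, and reducing the pixel-aperture gap moves it forward."""
    actions = []
    if horizontal_shift < 0:        # move visible area to the left
        actions.append("shift display image right")
    elif horizontal_shift > 0:      # move visible area to the right
        actions.append("shift display image left")
    if depth_shift < 0:             # move visible area forward
        actions.append("reduce pixel-aperture gap")
    elif depth_shift > 0:           # move visible area backward
        actions.append("increase pixel-aperture gap")
    return actions

print(control_action(-1, 0))  # → ['shift display image right']
```

The inverse pairing in each branch reflects the mirror-like relation between image shift and visible-area shift stated above.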
Consequently, the display image generating unit 1960 can generate a display image according to the visible area that has been determined.
The presentation determining unit 1925 determines whether or not to generate presentation information based on the person positions of the viewers and based on the visible area information. The presentation information mainly fulfills the role of supporting the viewers who are not present within the visible area to move inside the visible area. As an example, the following can be criteria based on which the presentation determining unit 1925 determines that the presentation information is not to be generated.
For example, when the person positions of all viewers are present within the visible area, or when the person positions of particular viewers are present within the visible area, or when a two-dimensional image is being displayed on the display unit 130, or when a viewer instructs not to display the presentation information; the presentation determining unit 1925 determines that the presentation information is not to be generated.
Herein, a particular viewer refers to a viewer who is registered in advance, who possesses a remote controller, or who has different properties than the other viewers.
The presentation determining unit 1925 performs such determination by identifying the viewers or detecting a remote controller using a known image recognition operation or using detection signals from a sensor. The instruction by a viewer not to display the presentation information is input by operating a remote controller or a switch. The presentation determining unit 1925 is configured to detect the event of operation input and accordingly determine that an instruction not to display the presentation information has been issued by a viewer.
As an example, the following can be criteria based on which the presentation determining unit 1925 determines that the presentation information is to be generated.
For example, when a particular viewer is not present within the visible area, or when viewing of stereoscopic images is started, or when a viewer has moved, or when there is an increase or decrease in the number of viewers, or when a viewer instructs to display the presentation information; the presentation determining unit 1925 determines that the presentation information is to be generated.
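Taken together with the non-presentation criteria above, the determination can be sketched as follows. The field names and the precedence among the criteria are assumptions, since the embodiment does not fix an order of evaluation:

```python
def decide_presentation(state):
    """Return True if presentation information should be generated.
    state is a dict of boolean flags; missing flags default to False."""
    # Explicit suppression: 2D content or a viewer's "do not display" request.
    if state.get("user_disabled") or state.get("showing_2d"):
        return False
    # Explicit request by a viewer always presents.
    if state.get("user_requested"):
        return True
    # All viewers already inside the visible area: nothing to support.
    if state.get("all_viewers_inside"):
        return False
    # Viewing just started, a viewer moved, or the viewer count changed.
    if (state.get("viewing_started") or state.get("viewer_moved")
            or state.get("viewer_count_changed")):
        return True
    # Otherwise present only if the particular viewer is outside.
    return not state.get("particular_viewer_inside", False)

print(decide_presentation({"viewer_moved": True}))  # → True
```

Each branch corresponds to one of the enumerated criteria; a deployment could reorder or weight them differently.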
At the start of the viewing of stereoscopic images, particularly the stereoscopic viewing condition of the viewers is not clear. Hence, it is desirable to present the presentation information. Moreover, when a viewer moves, the stereoscopic viewing condition of that viewer undergoes a change. Hence, it is desirable to present the presentation information. Furthermore, when there is an increase or decrease in the number of viewers, particularly the stereoscopic viewing condition of the newly-added viewers is not clear. Hence, it is desirable to present the presentation information.
The presentation information generating unit 1121 generates the presentation information when the presentation determining unit 1925 determines that the presentation information is to be generated.
Explained below with reference to a flowchart illustrated in
Firstly, the observing unit 110 observes the viewers and obtains an observation image (Step S11). Then, the visible area determining unit 1950 determines the visible area information, and the person detecting/position calculating unit 1940 detects the viewers and determines the person positions (Step S12).
Subsequently, the presentation image generating unit 120 performs mapping of the person positions onto the visible area information (Step S13), and gets to know the number of viewers and the position of each viewer in the visible area information.
Then, from the visible area information and the person positions, the presentation determining unit 1925 determines whether or not to present the presentation information by implementing the abovementioned determination method (Step S51). If it is determined that the presentation information is not to be generated (no presentation at Step S51), then that marks the end of the operations without generating and displaying the presentation information and the presentation image. However, in this case, the configuration can be such that only the presentation image is generated and displayed.
On the other hand, at Step S51, if it is determined that the presentation information is to be generated (presentation at Step S51), then the system control proceeds to Step S14. Subsequently, in an identical manner to the second embodiment, the presentation image and the presentation information are generated and displayed (Steps S14 to S39).
In this way, in the third embodiment, whether or not to display the presentation information is determined based on the visible area information and the person positions of the viewers. If it is determined that the presentation information is to be displayed, the presentation information is generated and displayed. Hence, in addition to the effect achieved in the second embodiment, the convenience for the viewers is enhanced and it becomes possible to view satisfactory stereoscopic images without difficulty.
Thus, according to the first to third embodiments, it becomes possible for a viewer to easily recognize whether his or her current viewing position is within the visible area. As a result, the viewer can view satisfactory stereoscopic images without difficulty.
Meanwhile, an image processing program executed in the image processing devices 100, 1100, and 1900 according to the first to third embodiments is stored in advance in a ROM as a computer program product.
Alternatively, the image processing program executed in the image processing devices 100, 1100, and 1900 according to the first to third embodiments can be recorded in the form of an installable or executable file in a computer-readable recording medium such as a CD-ROM, a flexible disk (FD), a CD-R, or a DVD (Digital Versatile Disk).
Still alternatively, the image processing program executed in the image processing devices 100, 1100, and 1900 according to the first to third embodiments can be saved as a downloadable file on a computer connected to a network such as the Internet or can be made available for distribution through a network such as the Internet.
Meanwhile, the image processing program executed in the image processing devices 100, 1100, and 1900 according to the first to third embodiments contains a module for each of the abovementioned constituent elements (the observing unit, the presentation image generating unit, the presentation information generating unit, the recommended destination calculating unit, the presentation determining unit, the display unit, the person detecting/position calculating unit, the visible area determining unit, and the display image generating unit) to be implemented in a computer. As the actual hardware, for example, a CPU (processor) reads the image processing program from the abovementioned ROM and runs it so that the module for each of the abovementioned constituent elements is loaded into a main memory device. As a result, the observing unit, the presentation image generating unit, the presentation information generating unit, the recommended destination calculating unit, the presentation determining unit, the display unit, the person detecting/position calculating unit, the visible area determining unit, and the display image generating unit are generated in the main memory device.
While certain embodiments of the inventions have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Moreover, the various modules of the systems described herein can be implemented as software applications, hardware and/or software modules, or components on one or more computers, such as servers. While the various modules are illustrated separately, they may share some or all of the same underlying logic or code.
This application is a continuation of International Application No. PCT/JP2011/057546, filed on Mar. 28, 2011, which designates the United States; the entire contents of which are incorporated herein by reference.
Related applications: Parent, International Application No. PCT/JP2011/057546, filed Mar. 2011 (US); Child, Application No. 14037701 (US).