The present invention contains subject matter related to Japanese Patent Application JP 2005-236034 filed in the Japanese Patent Office on Aug. 16, 2005, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a method of displaying pictures, a program for displaying pictures, a recording medium holding the program, and a display unit, all of which are suitable for enhancing the viewer's sense of actually being in the scene when watching programs provided on TV and the like.
2. Description of the Related Art
Various methods of processing picture signals to enhance the viewer's sense of being in the real scene when watching programs provided on TV and the like have been proposed. For example, Japanese Unexamined Patent Application Publication No. Hei 9-81746 discloses a method of detecting, from pictures, the distance to an object and processing the picture signals on the basis of the detected distance and in accordance with the movement of the user's viewpoint.
Incidentally, in real life, the focal point of the eye changes as the viewpoint moves, so one can keep an object in focus as its distance changes. Moreover, one perceives the changing distance to the object from the change of the focal distance of the eyes and from the stereoscopic view obtained through both eyes.
On the other hand, although some TV programs include both near and far scenes, every scene is shown on a display unit at a fixed distance from the viewer.
As described above, the related art fails to give viewers of programs provided on TV and the like the depth perception they have in real life; it is therefore difficult for such viewers to feel as if they were actually in the scenes they watch.
In view of the above problem, there is a need for a method of displaying pictures, a program for displaying pictures, a recording medium holding the program, and a display unit, all of which are capable of enhancing the viewer's sense of actually being in the scene when watching programs provided on TV and the like.
According to an embodiment of the present invention, there is provided a method of classifying picture signals based on the depth values of their pictures, choosing, based on the classification, displays from among a plurality of displays arranged at different distances from the user, and displaying the pictures of at least one picture signal as moving pictures on the chosen displays.
According to an embodiment of the present invention, there is provided another method of dividing the picture of a picture signal, based on the above classification, into pictures occupying different time domains, and repeatedly reproducing one of the pictures whose time domain differs from that of the inputted picture.
According to an embodiment of the present invention, the depth values of the pictures of the classified picture signals are compared, and the displays on which the pictures are to be shown are chosen based on the result of the comparison.
According to an embodiment of the present invention, there is provided still another method of classifying picture signals based on the depth values of the pictures, displaying the pictures of the classified picture signals on a plurality of displays, and moving the displays in accordance with the depth values of the pictures.
According to an embodiment of the present invention, the viewer's sense of actually being in the scene when watching programs provided on TV and the like can be enhanced.
Preferred embodiments of the display system of the present invention will be described below with reference to the accompanying drawings.
The display system according to an embodiment of the present invention estimates the depth values of pictures based on picture signals, classifies the pictures based on the estimated depth values, and shows pictures of different depth values on displays 1A, 1B, and 1C arranged at different distances from the user, as shown in the drawings.
If two displays 14a and 14b are put at the same distance from the user as shown in the drawings, the user perceives the pictures on them at the same depth.
The picture sources 11 are, for example, TV tuners, DVD (Digital Versatile Disc) players, video tape recorders, and so on. With the above construction, the display system 10 provides users with various picture contents supplied from the picture sources 11.
As shown in the drawings, the display system 10 of the first embodiment includes the picture sources 11, depth classifiers 12 which receive picture signals V from the picture sources 11, a destination selector 13, displays 141 to 14n arranged at different distances from the user, and a memory 15.
Each depth classifier 12 includes a characteristics-extracting unit 12A and a depth-determining/classifying unit 12B, as shown in the drawings.
As shown in the drawings, the characteristics-extracting unit 12A includes a motion-vector detector 61, a movement finder 62, a camera's movement finder 63, a static-part finder 64, a dynamic-part finder 65, a bust shot/close-up finder 66, a color histogram developer 67, and a correlation-coefficient calculator 68.
The depth-determining/classifying unit 12B uses the depth-defining tables shown in the drawings to determine the depth values of pictures from the values of characteristics supplied by the characteristics-extracting unit 12A.
By referring to the flowchart in the drawings, the depth-classifying process performed by the depth classifiers 12 will be described below.
When the picture signals V are supplied to the depth classifiers 12, the classifiers begin to process them. In Step S11, the motion-vector detector 61 of the characteristics-extracting unit 12A of each depth classifier 12 detects motion vectors in macro-blocks; the macro-block is the unit for the detection of motion vectors.
In Step S11, if an object is moving against its background, the motion-vector detector 61 detects, in the macro-blocks where the object exists, motion vectors different from those in the other macro-blocks. On the other hand, if a picture of a certain field is taken and there is no movement in the picture, the motion-vector detector 61 detects motion vectors with values of zero in all the macro-blocks of the picture.
Further, in Step S11, if the TV camera is being panned, the motion-vector detector 61 detects, in all the macro-blocks, the same motion vector corresponding to the panning direction and velocity, as shown in the drawings.
Also, in Step S11, if a picture of a certain field is taken and a person “A” is moving horizontally in the picture, the motion-vector detector 61 detects motion vectors corresponding to the movement of the person “A” in the macro-blocks where the person “A” exists, and motion vectors with values of zero in all the macro-blocks constituting the background of the person “A,” as shown in the drawings.
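The per-macro-block detection in Step S11 can be pictured as exhaustive block matching between consecutive frames. The following is a minimal Python sketch of that idea, not the patent's actual detector; the grayscale frames, the 16x16 block size, and the ±8-pixel search range are assumptions.

```python
import numpy as np

def motion_vector(prev, curr, bx, by, block=16, search=8):
    """Estimate one macro-block's motion vector by exhaustive block
    matching: find the offset into the previous frame that minimizes
    the sum of absolute differences (SAD) with the current block.
    prev, curr: 2-D grayscale frames; (bx, by): top-left of the block."""
    h, w = prev.shape
    target = curr[by:by + block, bx:bx + block].astype(np.int32)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block would fall outside the frame
            cand = prev[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dx, dy)
    return best_vec  # (0, 0) for a static block, as in Step S11
```

Running this over every macro-block of a field yields the vector map that the following steps operate on.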
In Step S12, the movement finder 62 determines whether there is movement in the picture by finding whether at least one macro-block in the picture has a motion vector exceeding a certain threshold value.
If the movement finder 62 determines in Step S12 that there is movement in the picture, the processing advances to Step S13. In Step S13, the camera's movement finder 63 determines whether the camera is moving by determining whether the motion vectors of the macro-blocks constituting the background exceed a certain threshold value. If the movement finder 62 determines in Step S12 that there is no movement in the picture, the processing advances to Step S16.
If the camera's movement finder 63 determines in Step S13 that the camera is moving, the processing advances to Step S14. In Step S14, the static-part finder 64 calculates the area of the object by counting the macro-blocks whose motion vectors differ from those of the background macro-blocks. Then, the processing advances to Step S19.
If the camera's movement finder 63 determines in Step S13 that the camera is not moving, the processing advances to Step S15. In Step S15, the dynamic-part finder 65 calculates the area of the object by counting the macro-blocks in which movement is detected. Then, the processing advances to Step S19.
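Steps S12 to S15 form a small decision tree over the macro-block vector map. The Python sketch below mirrors that branching; the threshold value and the choice of the most frequent vector as the background (panning) motion are illustrative assumptions, not values from the patent.

```python
import numpy as np

def object_area(vectors, move_thresh=1.0):
    """vectors: (rows, cols, 2) array of per-macro-block motion vectors.
    Returns the object's area in macro-blocks, or None when Step S12
    finds no movement, following the branching of Steps S12 to S15."""
    mags = np.linalg.norm(vectors, axis=2)
    if not (mags > move_thresh).any():
        return None                          # S12: no movement in the picture
    flat = vectors.reshape(-1, 2)
    values, counts = np.unique(flat, axis=0, return_counts=True)
    background = values[counts.argmax()]     # assumed: majority vector = background
    if np.linalg.norm(background) > move_thresh:
        # S13 yes / S14: camera moving; object blocks deviate from the background
        diff = np.linalg.norm(vectors - background, axis=2)
        return int((diff > move_thresh).sum())
    # S13 no / S15: camera still; object blocks are simply the moving ones
    return int((mags > move_thresh).sum())
```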
In Step S16, the bust shot/close-up finder 66 determines whether the picture of the person “A” is a bust shot or a close-up as shown in the drawings, i.e., whether the picture of the person “A” is of a certain magnitude.
If the bust shot/close-up finder 66 determines in Step S16 that the picture of the person “A” is of a certain magnitude, the processing advances to Step S19. A bust shot or a close-up of a person can be regarded as a picture taken by a camera at a relatively short distance from the person. Accordingly, if the distance to the person “A” cannot be determined from the motion vectors, the bust shot/close-up finder 66 calculates the area of the face of the person “A” to determine whether the picture was taken at a relatively short distance from the person “A.”
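As a later passage notes, the face portion can be found from the shape of a skin-color portion, and the area it occupies indicates how close the camera was. A rough Python sketch of that idea follows; the HSV skin range and both area thresholds are assumptions chosen purely for illustration.

```python
import numpy as np

def face_area_fraction(frame_hsv):
    """Fraction of pixels falling in an assumed skin-color range
    (8-bit OpenCV-style HSV; the bounds are illustrative)."""
    h, s = frame_hsv[..., 0], frame_hsv[..., 1]
    skin = (h < 25) & (s > 40) & (s < 180)
    return float(skin.mean())

def shot_type(frame_hsv):
    """Step S16: classify the shot by the area the face occupies
    (both thresholds are assumed values)."""
    f = face_area_fraction(frame_hsv)
    if f > 0.30:
        return "close-up"   # the face fills much of the frame
    if f > 0.10:
        return "bust shot"  # roughly head and shoulders
    return "other"          # face too small: fall through to Step S17
```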
If the bust shot/close-up finder 66 determines in Step S16 that the picture of the person “A” is not of a certain magnitude, the processing advances to Step S17. In Step S17, the color histogram developer 67 divides the picture under processing into equal sections and makes a color histogram for each section to grasp the distribution of colors in the picture. The color histogram developer 67 may divide the picture horizontally into three equal sections “L,” “C,” and “R” as shown in the drawings.
If the picture is of a landscape as shown in the drawings, similar colors spread over the whole picture, so the color histograms of the sections “L,” “C,” and “R” resemble one another.
If the picture is of a certain object taken at a short distance as shown in the drawings, the object occupies much of the picture, so the color histogram of the section where the object exists differs considerably from the color histograms of the other sections.
Accordingly, it can be regarded that the larger the difference among the color histograms of the sections “L,” “C,” and “R,” the shorter the distance from the camera to the object.
After the color histogram developer 67 makes the histograms of the sections “L,” “C,” and “R,” the processing advances from Step S17 to Step S18. In Step S18, the correlation-coefficient calculator 68 calculates the coefficient of correlation among the color histograms of the sections “L,” “C,” and “R” by finding the sum of absolute values of the differences between (i) the frequencies of levels in the histogram of the section “C” and (ii) the frequencies of levels in the histograms of the sections “L” and “R.” Then, the processing advances to Step S19.
For the calculation of the coefficient of correlation among the color histograms of the sections “L,” “C,” and “R,” the histogram of the center section “C” may be treated as the standard, or the most peculiar histogram may be treated as the standard.
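Steps S17 and S18 can be sketched in Python as follows, taking the center section “C” as the standard; the bin count and the per-channel normalization are assumptions.

```python
import numpy as np

def lcr_histograms(frame, bins=32):
    """Step S17: split the frame into vertical thirds L, C, R and build a
    normalized per-channel color histogram for each section."""
    w = frame.shape[1]
    sections = (frame[:, :w // 3], frame[:, w // 3:2 * w // 3], frame[:, 2 * w // 3:])
    hists = []
    for sec in sections:
        h = [np.histogram(sec[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
        h = np.concatenate(h).astype(float)
        hists.append(h / h.sum())  # normalize so sections of unequal width compare fairly
    return hists  # [L, C, R]

def histogram_difference(frame):
    """Step S18: sum of absolute differences of the side histograms
    against the center one; a larger value suggests a nearer object."""
    hl, hc, hr = lcr_histograms(frame)
    return float(np.abs(hc - hl).sum() + np.abs(hc - hr).sum())
```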
The characteristics-extracting unit 12A supplies, as values of characteristics, the area of the object found in Step S14 or S15, the information on the bust shot and the close-up acquired in Step S16, and the coefficient of correlation among the color histograms found in Step S18 to the depth-determining/classifying unit 12B.
In Step S19, the depth-determining/classifying unit 12B determines the depth of the picture by using (i) the values of characteristics supplied from the characteristics-extracting unit 12A and (ii) the depth-defining tables, shown in the drawings, that define the depth of the object.
The depth-defining tables define depth values corresponding to the area of the object, to the information on the bust shot and the close-up, and to the coefficient of correlation among the color histograms, respectively.
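A depth-defining table can be thought of as a lookup from a characteristic value to a depth value. The sketch below mimics that lookup in Python; every numeric breakpoint in it is an invented stand-in, since the actual table entries appear only in the drawings.

```python
def depth_from_characteristics(object_area, shot, hist_diff):
    """Step S19: derive a depth value from whichever characteristic was
    extracted. All breakpoints below are illustrative stand-ins for the
    entries of the depth-defining tables."""
    if object_area is not None:            # area found in Step S14 or S15
        # a larger on-screen area suggests a nearer object, i.e. less depth
        return 1 if object_area > 200 else (2 if object_area > 50 else 3)
    if shot == "close-up":                 # information from Step S16
        return 1
    if shot == "bust shot":
        return 2
    # coefficient from Step S18: a larger difference suggests a nearer object
    return 1 if hist_diff > 1.0 else (2 if hist_diff > 0.4 else 3)
```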
In Step S19, the depth classifiers 12 supply the depth values determined by the depth-determining/classifying units 12B to the destination selector 13.
The destination selector 13 determines, based on the depth values, which picture is outputted to which display.
After receiving the depth values of the pictures from the depth classifiers 12, the destination selector 13 sorts the depth values. The depth increases in the order of D1, D2, D3, . . . , Dm. Pictures 1, 2, . . . , m are numbered based on their depth values: the smaller the depth value of a picture, the smaller its number. Likewise, the distance from the user to the display increases in the order of 141, 142, 143, . . . , 14n. The destination selector 13 stores the picture signals in the memory 15 and outputs them to the displays 14 sequentially. For example, the destination selector 13 outputs the picture signal with the depth value D1 to the display 141.
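The selector's behavior just described (sort pictures by depth value, send shallower pictures to nearer displays) can be sketched in a few lines of Python. Sending overflow pictures to the farthest display when there are more pictures than displays is an assumption; the text leaves that case open.

```python
def assign_displays(depth_values, n_displays):
    """Map each picture to a display: pictures sorted by increasing depth
    value go to displays sorted by increasing distance from the user.
    Returns, for picture i, the display index (0 = nearest display)."""
    order = sorted(range(len(depth_values)), key=lambda i: depth_values[i])
    assignment = [0] * len(depth_values)
    for rank, picture in enumerate(order):
        assignment[picture] = min(rank, n_displays - 1)  # assumed overflow rule
    return assignment

# Example: depths [3.0, 1.0, 2.0] with three displays gives
# assign_displays([3.0, 1.0, 2.0], 3) == [2, 0, 1], i.e. the picture
# with the smallest depth value lands on the nearest display.
```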
As described above, the display system 10 of the first embodiment shows a picture on a display farther from the user as the distance to the object in the picture increases. Therefore, the user can perceive the changing distance to the object from the change of focal distance and from the stereoscopic view through both eyes when watching a program. Thus, unlike the related art, the display system 10 can enhance the viewer's sense of actually being in the scene when watching programs provided on TV and the like.
The display system 10 classifies picture signals based on the depth values of the pictures: it detects motion vectors in the picture of a picture signal and detects the area of the object from the motion vectors whose values differ from those of the background, so that characteristics caused by changes of the picture along the time line can be detected. Further, when such characteristics cannot be detected from the motion vectors, a face portion is detected according to the shape of a skin-color portion, and from the area of the face portion it is possible to detect the color characteristics indicating that the distance to the object is equal to or less than a prescribed distance. Further, when the face portion is smaller than a prescribed value, it is possible to detect the color characteristics by dividing the picture into a plurality of sections to grasp the distribution of colors in the picture and by computing the coefficient of correlation showing the levels of differences in the color distribution.
In a second embodiment according to the present invention, there is provided a display system in which one picture source 111 supplies one picture signal V1 to a depth classifier 121.
In the display system 20, when the picture source 111 supplies the picture signal V1 to the depth classifier 121, the depth classifier 121 determines the depth values of the pictures by using the depth-defining tables described above, and the destination selector 13 outputs each picture to the display 14 corresponding to its depth value.
Thus, the display system 20 of the second embodiment can show a plurality of pictures from one picture signal on the displays corresponding to their depth values, respectively.
Further, according to a third embodiment of the present invention, there is provided a display system having a picture converting unit in place of the depth classifier 12 of the display system 10 of the above first embodiment.
In the display system 30, when a picture source 111 supplies a picture signal V1 to a picture converting unit 32, the picture converting unit 32 converts the picture signal V1 into a plurality of pictures with different depth values. The destination selector 13 chooses a display 14 (display 141, 142, . . . , 14n) to which a picture is outputted according to the depth value of the picture.
By referring to the flowchart in the drawings, the picture extracting process performed by the picture extracting unit 34 of the picture converting unit 32 will be described below.
When the picture extracting unit 34 begins to extract a picture, in Step S21, it estimates a depth value of the inputted picture.
Then, in Step S22, the picture extracting unit 34 determines whether or not the depth value has changed from the one in the previous frame.
When the picture extracting unit 34 determines in Step S22 that the depth value in the current frame has changed from the one in the previous frame, the processing advances to Step S23. In Step S23, as shown in the drawings, the picture extracting unit 34 stores the frames extracted so far in the memory 33 as one picture with the previous depth value and begins extracting a new picture x. Then, the processing advances to Step S25.
On the other hand, when the picture extracting unit 34 determines in Step S22 that the depth value in the current frame has not changed from the one in the previous frame, the processing advances to Step S24 and, as shown in the drawings, the picture extracting unit 34 adds the current frame to the picture x being extracted. Then, the processing advances to Step S25.
In Step S25, the picture extracting unit 34 determines whether or not the depth values have changed in the next frames of the other pictures y and z.
When it is determined in Step S25, as shown in the drawings, that the depth values have changed in the next frames of the other pictures y and z, the processing advances to Step S26, and the picture extracting unit 34 takes out, from the memory 33, other pictures corresponding to the changed depth values. Then, the processing advances to Step S27.
On the other hand, when it is determined in Step S25 that the depth values have not changed in the next frames of the other pictures y and z, the processing by the picture extracting unit 34 advances to Step S27.
In Step S27, the picture extracting unit 34 outputs a picture for each depth value and ends the picture extracting process.
Thus, in the display system 30 of the third embodiment, a plurality of pictures that differ in time series can be selectively shown on the displays 14 based on their depth values: for example, when the inputted picture is a near scene, another picture of a far scene is taken out of the memory 33, and when the inputted picture is a far scene, another picture of a near scene is taken out of the memory 33.
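The behavior described above can be sketched as follows in Python; the class name, the policy of starting a new segment whenever the depth value changes, and the cyclic replay of stored segments are illustrative assumptions, not the patent's specified implementation.

```python
from collections import defaultdict

class PictureExtractor:
    """Sketch of the Step S21-S27 loop: cut the input at depth changes,
    keep the latest segment per depth value in a memory (the counterpart
    of the memory 33), and output one picture per depth value, replaying
    stored segments alongside the live picture."""

    def __init__(self):
        self.memory = defaultdict(list)  # depth value -> frames of latest segment
        self.current_depth = None

    def feed(self, frame, depth):
        """Process one inputted frame with its estimated depth value and
        return {depth value: frame to show} for the displays."""
        if depth != self.current_depth:      # S22/S23: depth value changed
            self.memory[depth] = []          # start a new segment at this depth
            self.current_depth = depth
        self.memory[depth].append(frame)     # S24: extend the current segment
        outputs = {}
        for d, frames in self.memory.items():
            if d == depth:
                outputs[d] = frame           # live picture at the input's depth
            else:
                # S25/S26: replay the stored segment of a different depth
                i = (len(self.memory[depth]) - 1) % len(frames)
                outputs[d] = frames[i]
        return outputs
```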
Further, according to a fourth embodiment of the present invention, there is provided a display system having a moving control unit in place of the destination selector 13 of the above first embodiment.
In the display system of the fourth embodiment, as shown in the drawings, the moving control unit moves each display 14 in accordance with the depth value of the picture shown on it: a display showing a picture of a far scene is moved away from the user, and a display showing a picture of a near scene is moved toward the user.
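One way to picture the moving control is as a mapping from a picture's depth value to a physical display position, which an actuator then tracks. The Python sketch below is a minimal illustration; the linear form and every numeric limit are assumptions.

```python
def target_distance(depth_value, near_m=0.5, far_m=3.0, depth_max=10.0):
    """Map a picture's depth value to a display distance between an
    assumed nearest (0.5 m) and farthest (3.0 m) position."""
    t = min(max(depth_value / depth_max, 0.0), 1.0)  # clamp to [0, 1]
    return near_m + t * (far_m - near_m)

# A moving control unit would drive each display's actuator toward
# target_distance(d) whenever the depth value d of its picture changes.
```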
Further, the present invention is not limited to the embodiments described above. It is needless to say that various modifications can be made without departing from the spirit of the present invention.
In the above embodiments, a series of processing programs is preinstalled in the system. However, instead of being preinstalled, the program may be provided by downloading through networks such as the Internet, or through various storage media. Storage media usable for this purpose include optical discs such as CD-ROMs and DVDs, magnetic discs such as floppy (registered trademark) discs, removable hard disk drives integrally formed with their drive mechanisms, memory cards, and so on.
In the above embodiments, the display unit having the combined function blocks shows each picture on a display. However, the display system described above may instead be provided inside each display.
The present invention can be applied, for example, to viewing a program wherein a far scene and a near scene are switched.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Number | Date | Country | Kind
---|---|---|---
P2005-236034 | Aug 2005 | JP | national