The present invention relates to multi-view display technology, and more particularly, to a method and an apparatus for providing mono-vision in a multi-view system.
Providing stereo-vision (or three-dimensional, 3D) viewing without glasses is very important for users/viewers. Multi-view technology is therefore adopted in many 3D TV sets, enabling one or more persons to watch a 3D movie without the need to wear glasses. A lenticular lens is an array of magnifying lenses designed so that, when viewed from slightly different angles, different images are magnified. A number of manufacturers are developing auto-stereoscopic high-definition 3D televisions using lenticular lens systems to avoid the need for special glasses. This technology puts a lenticular lens screen on top of an LCD, which can display an image composed of two or more images of the same scene captured by two or more cameras from different viewpoints. Since the image is placed in the focal plane of the lenticular lens, different views are refracted only at fixed angles. The lenticular lens of the 3D TV set refracts the left perspective view of a scene to a person's left eye and the right perspective view of the same scene to the right eye, so that the person perceives a stereoscopic vision.
In a multi-view stereoscopic display, a lenticular screen is placed on top of an LCD as in the two-view display described above. But in this case the LCD is located at the focal plane of the lenses, as shown in
Thus, a method is desired which allows different viewers to use this kind of 3D TV set at the same time. For example, one viewer can sit at a comfortable position to see mono-vision while others sit at comfortable positions to see stereo-vision.
According to an aspect of the present invention, there is provided a method for providing mono-vision in a multi-view system comprising at least two views of the same scene and at least one viewing zone, wherein each viewing zone is provided with two views, and the at least two views are arranged in such a way that two adjacent views among the at least two views are provided to each viewing zone so as to provide stereo-vision. The method comprises the steps of receiving an instruction requesting a viewing zone to provide mono-vision; and rearranging the at least two views in such a way that the two views provided to the viewing zone that provides mono-vision are the same view, namely one among the at least two views.
According to another aspect of the present invention, there is provided an apparatus for providing mono-vision in a multi-view system comprising at least two views of the same scene and at least one viewing zone, wherein each viewing zone is provided with two views, and the at least two views are arranged in such a way that two adjacent views among the at least two views are provided to each viewing zone so as to provide stereo-vision. The apparatus comprises an input module 1201 used to receive the at least two views of the same scene; an instruction module 1202 used to receive an instruction requesting to provide mono-vision for a viewing zone among the at least one viewing zone and to pass the instruction to a pixel rearrangement module 1203; and the pixel rearrangement module 1203 used to, upon receiving the instruction requesting to provide mono-vision for the selected viewing zone, rearrange the at least two views in such a way that the two views provided to the selected viewing zone are the same view, namely one among the at least two views.
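As an illustration of how these modules could cooperate, a minimal sketch is given below. It assumes the views are stored in a dictionary keyed by view index and that viewing zone M is served by views m and m+1; the class and method names are introduced here for illustration only and are not part of the described apparatus.

```python
# Minimal sketch (not the claimed implementation) of how the three modules
# referred to above could cooperate. Class names, method names, and the data
# layout are assumptions introduced for illustration only.

class PixelRearrangementModule:                       # module 1203
    def rearrange(self, views, zone_m):
        """Feed the selected zone the same view twice (zone M uses views m and m+1)."""
        views[zone_m + 1] = views[zone_m]
        return views

class InstructionModule:                              # module 1202
    def __init__(self, rearranger):
        self.rearranger = rearranger

    def on_mono_request(self, views, zone_m):
        # Pass the mono-vision instruction on to the pixel rearrangement module.
        return self.rearranger.rearrange(views, zone_m)

class InputModule:                                    # module 1201
    def receive_views(self, source):
        # Receive the at least two views of the same scene.
        return dict(source)

# Example: four views, mono-vision requested for viewing zone 2 (views 2 and 3).
views = InputModule().receive_views({1: "view1", 2: "view2", 3: "view3", 4: "view4"})
views = InstructionModule(PixelRearrangementModule()).on_mono_request(views, 2)
print(views)  # {1: 'view1', 2: 'view2', 3: 'view2', 4: 'view4'}
```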
According to these aspects of the present invention, a viewer is allowed to watch mono-vision in a multi-view system that originally provides stereo-vision.
It is to be understood that more aspects and advantages of the invention will be found in the following detailed description of the present invention.
The accompanying drawings, which are included to provide a further understanding of the invention, illustrate embodiments of the invention together with the description, which serves to explain the principle of the invention. Therefore, the invention is not limited to the embodiments. In the drawings:
An embodiment of the present invention will now be described in detail in conjunction with the drawings. In the following description, some detailed descriptions of known functions and configurations may be omitted for clarity and conciseness.
In the step 601, the display receives an instruction requesting a change of the vision of a viewing zone, e.g. viewing zone 1 as shown in the
In the step 602, the display copies the content of one of the two adjacent views corresponding to the viewing zone to the other view. In this example, view 1 and view 2 correspond to viewing zone 1. So the display can either copy the content of view 1 to view 2, or copy the content of view 2 to view 1.
The method as shown in the
Assuming there is a total of N views, a viewer sits in viewing zone M, and viewing zone M corresponds to view m and view m+1. The pseudo-code for changing the vision of viewing zone M from stereo-vision to mono-vision without affecting the other viewers' viewing experience is as follows:
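A minimal sketch of such a rearrangement is given below, assuming the N views are held in a dictionary keyed 1..N; the function names and data layout are assumptions made for illustration only.

```python
# Minimal sketch of the rearrangement, assuming the N views are kept in a
# dictionary keyed 1..N and that viewing zone M is served by views m and m+1.
# The function names and data layout are assumptions made for illustration only.

def make_zone_mono(views, m):
    """Give zone M mono-vision by feeding it the same view twice.

    views : dict mapping view index (1..N) to its pixel data
    m     : index of the first of the two views serving zone M
    """
    # Copy view m onto view m+1 (copying m+1 onto m would work equally well).
    views[m + 1] = views[m]
    # All other views are left untouched, so the remaining zones keep receiving
    # two distinct adjacent views and therefore keep their stereo-vision.
    return views

def make_zone_stereo(views, original_views, m):
    """Restore stereo-vision for zone M from an untouched copy of the views."""
    views[m] = original_views[m]
    views[m + 1] = original_views[m + 1]
    return views
```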
According to the present embodiment, it is the display that performs the pixel rearrangement for changing the vision of a viewing zone from stereo-vision to mono-vision. According to a variant, the processing module performing the pixel rearrangement can be located in an independent device other than the display.
According to a variant, the viewing zone of mono-vision can change as the viewer moves in front of the stereoscopic display. For example, camera-based pattern recognition or IR (infrared) based time-of-flight (TOF) technologies can be used to detect in which viewing zone the viewer is. Correspondingly, the current viewing zone where the viewer stays is changed from stereo-vision to mono-vision, and the previous viewing zone where the viewer stayed is changed from mono-vision back to stereo-vision. Regarding IR-based TOF, the basic principle involves sending out a signal and measuring a property of the signal returned from a target. The distance is obtained by multiplying the time of flight by the velocity of the signal in the application medium. Another improvement of the TOF technique is to measure the phase shift of encoded infrared waves in order to calculate the distance between the object and the light source.
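As a rough illustration of the two distance relations mentioned above, the sketch below assumes a round-trip measurement at approximately the speed of light and a continuously modulated IR wave; the constants and the factor of one half are assumptions about a typical TOF sensor, not values taken from this description.

```python
import math

# Rough illustration of the two TOF relations mentioned above. The constants
# (signal velocity, modulation frequency) and the factor of 1/2 for the round
# trip are assumptions about a typical IR TOF setup.

C = 3.0e8  # signal velocity in air, m/s (approximately the speed of light)

def distance_from_time_of_flight(t_round_trip_s):
    """Distance from a pulsed measurement: half the round-trip path length."""
    return C * t_round_trip_s / 2.0

def distance_from_phase_shift(phase_shift_rad, modulation_freq_hz):
    """Distance from the phase shift of a continuously modulated IR wave."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

# Example: a 20 ns round trip corresponds to about 3 m.
print(distance_from_time_of_flight(20e-9))           # ~3.0 m
print(distance_from_phase_shift(math.pi / 2, 20e6))  # ~1.875 m at 20 MHz modulation
```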
When a viewer turns on the stereoscopic TV (the video is shown in stereo in the default mode), the head tracker system stays asleep for power saving. After the viewer presses a button on the remote control instructing the TV to change to mono-vision, the TV wakes up the head tracker system to detect in/to which viewing zone the viewer stays/moves. After the TV receives the viewing zone information, it performs the pixel rearrangement correspondingly by using the methods described above. In addition, if the viewer does not move, the head tracker system returns to sleep mode for power saving.
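A minimal, self-contained sketch of this power-saving flow is given below; the class and method names are assumptions for illustration, and a real TV would be event-driven rather than invoked directly in this way.

```python
# Minimal sketch of the power-saving flow described above.
# All class and method names are assumptions introduced for illustration.

class HeadTracker:
    def __init__(self):
        self.awake = False
    def wake(self):
        self.awake = True
    def sleep(self):
        self.awake = False
    def detect_viewing_zone(self):
        return 1            # stub: a real tracker would return the detected zone
    def viewer_moved(self):
        return False        # stub: a real tracker would report movement

def handle_mono_request(tracker, rearrange_pixels):
    tracker.wake()                              # tracker sleeps by default
    zone = tracker.detect_viewing_zone()
    rearrange_pixels(zone)                      # e.g. the rearrangement sketched above
    if not tracker.viewer_moved():
        tracker.sleep()                         # back to sleep for power saving

handle_mono_request(HeadTracker(), lambda zone: print("mono-vision for zone", zone))
```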
For the head tracking system, some existing technologies, such as depth sensors and knowledge-based recognition, can be used to detect the position information of the viewer relative to the stereoscopic TV. For example, Microsoft uses the PRIMESENSE depth sensor (which uses light coding technology) in the Kinect system. The depth sensor can generate a good depth image, which provides geometry information directly, so it is easy to subtract the background. But such a system does not sense the texture of the face or the orientation of the head. Knowledge-based recognition technology is robust and recovers quickly, because it depends on finding a good match in the training set rather than performing a local search in parameter space. Using these existing technologies, the head tracking system can obtain the position of the head. The method for detecting the viewing zone where the viewer stays is introduced below.
Here, because the system knows that the viewer previously stayed in Zone M and that the viewer has moved m = X/6.5 zones, the system can calculate the current viewing zone where the viewer stays, i.e. Zone M+m. Correspondingly, the display performs the pixel rearrangement based on the current viewing zone.
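As a worked example of this zone update, the sketch below assumes X is the viewer's lateral displacement in centimetres and each viewing zone is 6.5 cm wide (matching the formula m = X/6.5); these units are assumptions made for illustration.

```python
# Worked example of the zone update above, assuming X is the viewer's lateral
# displacement in centimetres and each viewing zone is 6.5 cm wide (the value
# used in the formula m = X / 6.5). These units are assumptions for illustration.

def current_zone(previous_zone_m, displacement_x_cm, zone_width_cm=6.5):
    zones_moved = round(displacement_x_cm / zone_width_cm)   # m = X / 6.5
    return previous_zone_m + zones_moved

# A viewer who was in zone 3 and moved 13 cm to the side is now in zone 5.
print(current_zone(3, 13.0))   # -> 5
```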
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations shall fall in the scope of the invention.
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/CN2010/002193 | 12/29/2010 | WO | 00 | 6/28/2013