Intelligent Device for Both Recording and Playing Back 3D Movies, and the Relevant Apparatus and Methods

Abstract
The present invention relates to apparatus and methods for integrating both 3D movie recording and 3D movie playing back on intelligent devices or smart devices, wherein the 3D movie recording method can increase the view depth beyond human-eye capability, and the 3D movie playing-back method and apparatus provide large tolerance to eye motion, so that the viewer is able to avoid the side effects of vertigo, headache, and eye fatigue; wherein an intelligent device is any one of a cell phone, PDA, iPhone/smartphone, iPad, Google Glass, pocket/tablet PC, GPS unit, eBook reader, laptop or notebook, computer, TV or iTV, etc.
Description
FIELD OF INVENTION

The present invention relates to apparatus and methods for integrating both 3D movie recording and 3D movie playing back on intelligent devices (IDs) or smart devices, wherein the 3D movie recording method can increase the 3D view depth beyond human-eye capability, and the 3D movie playing-back method and apparatus provide large tolerance to eye motion, so that the viewer is able to avoid the side effects of vertigo, headache, and eye fatigue; wherein an intelligent device is any one of a cell phone, PDA, iPhone/smartphone, iPad, Google Glass, pocket/tablet PC, GPS unit, eBook reader, laptop or notebook, computer, TV or iTV, etc.


BACKGROUND OF INVENTION

Even though there is one disclosure in the prior art related to a 3D camera system on a mobile phone, and there are some disclosures related to 3D display on a mobile phone, this invention integrates a 3D recording system and a 3D playing-back system together on an intelligent device (ID), including not only mobile phones but also iTVs, smartphones, iPads, computer monitors, etc. Prior art 1, WO2012082124 (A1) by Sony, describes a 3D camera system, disposed on a mobile communication device, comprising two camera modules movable relative to each other between two positions. However, its 3D view depth is very limited; by using the embodiments of this invention, the 3D view depth can be greatly increased. The other prior arts describe different variations of 3D display, either eyeglass-based or glassless, the latter mainly using simple parallax-barrier or lenticular technologies; these have no tolerance for eye motion, so the viewer easily suffers from the side effects of vertigo, headache, and eye fatigue. This invention provides large tolerance to eye motion, so the viewer is able to avoid these side effects.


Related U.S. Patent Documents

Application Number   Filing Date     Patent/Publication Number
U.S. 13/386,518      Dec. 16, 2010   U.S. 2012/0270598 A1
U.S. 13/035,021      Feb. 25, 2011   U.S. 2011/0281619 A1
U.S. 13/413,050      Mar. 6, 2012    U.S. 2012/0236406 A1
U.S. 12/433,789      Apr. 30, 2009   8,243,065
U.S. 12/770,196      Apr. 29, 2010   8,229,510
U.S. 13/013,324      Jan. 25, 2011   8,224,107
U.S. 10/334,772      Dec. 31, 2002   6,885,939

SUMMARY OF THE INVENTION

The present invention relates to an intelligent device (ID) that integrates both recording and playing back of 3D movies into current intelligent/smart devices, and to the corresponding apparatus and methods for 3D movie recording and 3D movie playing back. A smart device is any one of a cell phone, PDA, iPhone/smartphone, iPad, Google Glass, pocket/tablet PC, GPS unit, eBook reader, laptop or notebook, computer, TV or iTV, etc.


The apparatus for 3D movie recording includes at least one pair of cameras installed on the back side of the ID, and/or a single camera or one camera pair installed on the front side (facing the user), and allows smart-device users to record 3D movies or television episodes and play them back on any 3D-enabled display device or apparatus, including but not limited to the smart devices themselves, TVs, and computer monitors.


The apparatus and method for 3D movie recording include, when necessary, a camera-holder extension to increase the view depth beyond human-eye capability.


The apparatus and method for playing back 3D movies comprise a display apparatus, which includes an image-signal pixel-scanning device and shutter grids, and a system and algorithms for processing and control, the details of which are described later; by these, the viewer is able to avoid the side effects of vertigo, headache, and eye fatigue.


With this 3D recording apparatus and the built-in 3D playing-back apparatus, ID users can enjoy their own recorded 3D movies.


With this 3D recording apparatus and any other 3D-enabled display ID, people can meet, hold meetings, chat, shop, or sell online as if live, on a smartphone, TV, iPhone, or computer monitor, etc.; a vendor can show a product live online or on a smartphone/iPad/tablet, and the customer can see the product details in 3D.





DETAILED DESCRIPTION OF THE INVENTION
Basic Configurations of 3D Recording Apparatus

The invented apparatus has at least one camera pair (201, 202) or (201″, 202″) installed on the back side (100) (for recording), and/or another camera pair (201′ and 202′) or a single camera (201′″) installed on the front side (101) (facing the user), used for eye tracking if a single camera, or for eye-to-screen distance measurement plus eye tracking if a camera pair, and for other applications such as web meetings, web chatting, web selling and buying, etc., as shown in FIG. 1. The two cameras in any camera pair can synchronously record two pictures (at each moment) of the front view or back view or both views (if with two pairs) at two different (adjustable) angles, just as a human being's right eye and left eye see the views ahead.


The two cameras in a pair are focus-adjustable simultaneously during 3D recording, for either numerical focusing or optical focusing.


The two cameras in a pair are view-angle-adjustable simultaneously before or during 3D recording.


Any camera in a pair, or any pair, can be located at an arbitrary position, and the two cameras in a pair are not necessarily at the same horizontal or vertical location (not shown in FIG. 1).


Configurational Variations of the 3D Recording Apparatus

The typical human eye separation is about 7 cm to 10 cm, which leads to a typical 3D view depth of about 30 feet. This means human eyes have a distance sense ("far" or "close") only for objects within about 30 feet; for any object beyond this distance, the eye cannot tell its distance unless the viewer has knowledge relating the object to reference objects nearby. In other words, if we know how an object relates to the objects around it, we can judge its distance by knowledge (not by depth sense), for example from its apparent size: our brains are trained to interpret a small-looking car on a road as "far" and a large-looking one as "close", because a real car cannot be the size of a toy car and a toy car does not run on a road, so we never think a small-looking car on the road is a toy.


For the same reason, the distance between the two cameras in a pair should be large enough to give a sufficiently large view depth. To capture more of a 3D view, we sometimes need to increase the view depth beyond human-eye capability (say, 3D view beyond 30 ft, 100 ft, 5,000 ft, 50,000 ft, . . . ). We introduce an extension (such as 301, or 302, or 303 and 304, or 305, as shown in FIG. 2, but not limited to these examples) to achieve this goal by extending the distance between the cameras in a pair.


The view depth can be extended roughly to [30 feet × (camera distance/8 cm)]. For example, if the camera distance is 24 cm, the view depth increases to 90 feet.
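
As a quick check of this rule of thumb, here is a minimal Python sketch (the function name is ours, not from the specification):

```python
# Approximate 3D view depth from camera separation, per the rule of thumb
# above: depth scales linearly with baseline, ~30 ft at an ~8 cm separation.
def view_depth_feet(camera_distance_cm: float) -> float:
    return 30.0 * (camera_distance_cm / 8.0)

print(view_depth_feet(8.0))   # 30.0 ft, human-eye-like baseline
print(view_depth_feet(24.0))  # 90.0 ft, matching the example above
```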


One end of extension 301 can slide out from the ID to either side (right or left; FIG. 2 only shows the left, but it can be to the right as well) to adjust the camera distance; one camera is installed on that end of the extension, and the other camera of the pair is installed on the other side of the ID.


Extension 302 can be pulled out from the ID and rotated by any angle to adjust the camera distance; a camera is installed at one end of the extension, which rotates about the other end, fixed on the ID.


Extension 304 can slide out from extension 303, or from extension 302 or 301, to further increase the camera distance. A camera is installed at the outer end of extension 304.


Extension 305 represents a link between an off-ID camera and the ID, where the off-ID camera is one of the cameras in a camera pair. This link can be any of an optical-fiber link, an RF cable/wire, or an RF wireless link, carrying synchronized recording commands.


Movie Data Storage Method for 3D Recording

Movie data, i.e., a set of pictures, recorded by any camera pair for the right eye and left eye are stored in separate data files, or stored alternately, right-eye picture followed by left-eye picture, in the same data file, using any predefined 3D movie format or industry-standard format.
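
As an illustration only, the sketch below stores right-eye and left-eye frames alternately in one file, assuming a hypothetical raw container of concatenated frames; a real implementation would follow a predefined or industry-standard 3D format.

```python
# Alternating storage layout: R0, L0, R1, L1, ... in a single file.
# The raw concatenated-frame container here is an assumption for
# illustration, not a format defined by this specification.
def write_interleaved(path, right_frames, left_frames):
    with open(path, "wb") as f:
        for r, l in zip(right_frames, left_frames):
            f.write(r)  # right-eye picture for this time step
            f.write(l)  # left-eye picture for this time step

# Example with dummy 4-byte "frames":
write_interleaved("movie.3d", [b"R000", b"R001"], [b"L000", b"L001"])
```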


Recording Cameras Functioning as 3D Projector Cameras

The two cameras in a camera pair can also be used as 3D projector cameras, so people can watch a 3D movie on the ID screen or even on a wall (with glasses); this makes live conferences, meetings, or seminars online more convenient.


Configurations and Methods of 3D Playing-Back

The apparatus for playing back 3D movies includes a display apparatus and a shutter apparatus. The display apparatus is a picture-scanning device 400 or 400′ (i.e., the 2D image signal is scanned over the pixels of the picture-scanning device, also called the signal pixel-scanning device to distinguish it from the shutter pixel-scanning device), and the shutter apparatus is a 1D or 2D shutter grid 500 or 500′, as shown in FIG. 3. The methods for playing back 3D movies include the processing, the control system, and the algorithms (as shown in FIGS. 4, 5, 6 and 7). Two display modes are built in: glass mode and glassless mode. The shutter mentioned here is an optical switch, such as an LCD or DMD, but not limited to these.


For glass mode, all shutter grids are simply kept in the on state, and the movie is displayed in the same way as the current industry method for displaying 3D movies. The glasses' switching signal is transmitted from the display apparatus by either a wired or a wireless connection.


For the glassless mode, the apparatus and method are described below in detail.


Method and Algorithm for Glassless 3D Playing-Back

There are two methods for glassless 3D movie playing-back.


(1) Method I and its Algorithm


In Method I, the image-signal pixel-scanning device 400 displays a combined 3D movie picture (rather than a direct left-eye or right-eye picture) from the processing and control system, where the black part on device 400 indicates a picture stripe of the left-eye movie for the left eye 601, and the white part on device 400 indicates a picture stripe of the right-eye movie for the right eye 602. Each signal pixel on the pixel-scanning device includes 3 or 4 sub-pixels for 3 or 4 colors. One picture stripe contains at least one (i.e., one or more) vertical signal pixel line. The shutter consists of 1-dimensional shutter grids along the vertical (i.e., column grid lines) and is divided into groups to form column grid stripes (501 shows the zoomed-in view). Each grid stripe contains at least one (i.e., one or more) grid lines controlled by a synchronization signal from the control system; within each grid stripe, all grid lines simultaneously switch on or switch off the left-eye part or right-eye part of the combined 3D movie picture, respectively.


As shown in FIG. 4, pictures for the left eye 701, 703, . . . and pictures for the right eye 702, 704, . . . are split into stripes, and then left-eye-picture stripes are combined with right-eye-picture stripes into new frames 801, 802, 803, 804, . . . according to the time sequence. There are two types of frames: type L (801, 803, . . . ), with a left-eye-picture stripe starting from the left, and type R (802, 804, . . . ), with a right-eye-picture stripe starting from the left. The width of the picture stripes can be constant or adjustable, as described in detail below.
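
The following numpy sketch illustrates this split-and-combine step, assuming a constant stripe width of n pixel columns (all names are ours):

```python
import numpy as np

# Build a combined frame: a type-L frame takes its even-numbered stripes
# (counting from the left) from the left-eye picture and its odd-numbered
# stripes from the right-eye picture; a type-R frame does the opposite.
def combine(left, right, n, frame_type="L"):
    out = np.empty_like(left)
    for s, x in enumerate(range(0, left.shape[1], n)):
        src = left if (s % 2 == 0) == (frame_type == "L") else right
        out[:, x:x + n] = src[:, x:x + n]
    return out

left = np.zeros((4, 8, 3), np.uint8)        # dummy left-eye picture
right = np.full((4, 8, 3), 255, np.uint8)   # dummy right-eye picture
frame_L = combine(left, right, n=2, frame_type="L")  # L stripe starts at left
frame_R = combine(left, right, n=2, frame_type="R")  # R stripe starts at left
```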


There are two ways to select the stripe widths. The first way uses a constant width for the shutter grid stripes (denoted p = n*pixelSeparation) but adjusts (i.e., distributes from center to edge) the width of the picture stripes according to the information from algorithm calculation 913, based on the shutter grid stripe locations and the viewer's eye locations. The second way uses a constant width for the picture stripes (also denoted p = n*pixelSeparation) but adjusts (i.e., distributes from center to edge) the width of each shutter grid stripe according to the information from algorithm calculation 913, based on the picture location and the viewer's eye locations. Obviously, the second way has lower cost.


If the width of the grid stripes in the shutter is fixed (though it may have a distribution from center to edge), it is determined at setup by built-in software according to the geometry of the design (such as dE, dV, dL, the signal pixel size, the location of each signal pixel line, etc.), the user's eye separation, and the user's habits of viewing the screen (far from or close to it). If the grid stripe width is dynamically adjusted, it is determined from the dynamic eye separation, eye locations, and screen-to-face distance obtained from the eye-tracking algorithm. This method therefore has large tolerance to eye motion, and the viewer is able to avoid the side effects of vertigo, headache, and eye fatigue.


One example of the control system and processing algorithm (the actual control system and algorithm are not limited to this example) is shown in FIG. 5. The eyes are tracked by camera 911, and location-tracking algorithm 912 determines the eye locations. The default setup assumes the eyes are at the optimized location (fixed during watching). Before watching, the viewer has two choices: automatic eye tracking, or a fixed watching location. For the latter, the tracking camera can still help the viewer position or swing the head to the right location, with the guidance information displayed on screen before watching.


As shown in FIG. 5, the movie signal is transmitted from 3D TV source 901 to stream processor and controller 902, which transmits the image signal to image splitting and combination unit 904 and meanwhile generates a synchronization signal 905 for releasing the buffers and switching the shutter. Unit 904 splits the left-eye and right-eye pictures into stripes and combines the stripes into new L-type and R-type frames, as shown in FIG. 4. The new L and R frames are stored in buffers 906 and 907, respectively. At the beginning of display (during the first two frames), synchronization signal 905 is properly delayed (leaving enough time for unit 904 to finish its job) and then sent to trigger buffer 906 to release the frame's signal stream and display the L-type picture, while a synchronization signal simultaneously triggers the shutter to open (switch on) the shutter grid stripes for the L-type frame. Immediately after the L-type frame is displayed, buffer 906 triggers buffer 907 to release its frame's signal stream and display the R-type picture, while the shutter grid stripes for the R-type frame are switched on (and those for the L-type frame are switched off). Buffers 906 and 907 store at least one pair of L and R frames (but can store many pairs). After the first two frames, synchronization signal 905 triggers the frame buffers and shutter grid stripes periodically, releasing the frame streams and switching the shutter grid stripes on time.
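
A highly simplified sketch of this release-and-switch sequence follows; display_frame and set_shutter are hypothetical stand-ins for the display and shutter hardware, and in the real system the timing is driven by synchronization signal 905 rather than by sleeping.

```python
import time

def set_shutter(frame_type):   # hypothetical shutter-control hook: open the
    pass                       # stripes for this frame type, close the rest

def display_frame(frame):      # hypothetical display hook
    pass

# Alternate L and R frames, switching the shutter stripes at each frame.
def play(buffer_L, buffer_R, frame_period_s):
    for frame_L, frame_R in zip(buffer_L, buffer_R):
        set_shutter("L")
        display_frame(frame_L)
        time.sleep(frame_period_s)
        set_shutter("R")
        display_frame(frame_R)
        time.sleep(frame_period_s)
```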


Now consider the theory on which the system design and algorithms are based. Let dE be the distance between the two eyes, dV the vertical distance between the viewer's eyes (601, 602) and the shutter grids 500, dL the distance between pixel-scanning device 400 and the shutter grids 500, W the width of the display apparatus, and Np the total horizontal pixel count, such as 1920 (so W = Np*pixelSeparation). Then dL should be designed so that dL, p, and dV satisfy p = dE*dL/[dV + e*dL], where e = +1 for the first way and e = −1 for the second way. Accordingly, dL is determined by the formula dL = n*W*dV/[dE*Np − e*n*W], where n is the number of pixels in p, i.e., p = n*pixelSeparation (center to center). Usually dV is proportional to W; if we assume dV = q*W, then dL = n*q*W^2/[dE*Np − e*n*W]. For the first way (constant shutter grid stripe width), the picture stripe width should be dX = p*(dV + dL)/dV, while for the second way (constant picture stripe width), the shutter grid stripe width should be dX = p*dV/(dV + dL). All geometry parameters above are in inches.
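
As a sketch only, these formulas can be packaged into a small calculator (the parameter values in the example are illustrative, not taken from the specification):

```python
# Design formulas from the text. All lengths in inches.
# e = +1: first way (fixed shutter stripes, picture-stripe width adjusted);
# e = -1: second way (fixed picture stripes, shutter-stripe width adjusted).
def design(dE, dV, W, Np, n, e):
    dL = n * W * dV / (dE * Np - e * n * W)   # pixel-device-to-shutter distance
    p = dE * dL / (dV + e * dL)               # equals n*(W/Np) by construction
    if e == +1:
        dX = p * (dV + dL) / dV               # width of picture stripe
    else:
        dX = p * dV / (dV + dL)               # width of shutter grid stripe
    return dL, p, dX

# Illustrative example: dE = 2.5 in, dV = 18 in, W = 9 in, Np = 1920, n = 2.
print(design(2.5, 18.0, 9.0, 1920, 2, e=-1))
```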


To avoid the side effects of vertigo, headache, and eye fatigue, the eye-tracking system measures, in real time, dV (the vertical distance between the viewer's eyes and the shutter grids), dE, and the eyes' motion relative to the display screen center, and adjusts dX and the locations of the shutter stripes. Different users have different dE. Therefore, at the beginning, dX is set to p*[(dV + dL)/dV]^e; after that, if the distance dV changes, the stripe width dX is automatically adjusted according to the same formula, dX = p*[(dV + dL)/dV]^e. If the user's head swings (left or right) a distance dH relative to the center of the display screen, all shutter grid stripes (second way) or all picture stripes (first way) should move together by dH*dL/[dV + e*dL]. The left edge of a shutter stripe is offset from the corresponding left edge of the picture stripe by xLi − ip = (ip + dE/2)*dL/dV for the first way, or xLi − ip = −(ip + dE/2)*dL/(dV − dL) for the second way, where ip = i*p is the i-th pixel location. If there is only one eye-tracking camera, dH is determined from the correlation between two neighboring (in time sequence) tracking images by FFT (fast Fourier transform). If there are two eye-tracking cameras, dH is determined from the correlation between two neighboring tracking images (from either camera) by FFT, and dV is determined by a depth-reconstruction algorithm using the left image from the left camera and the right image from the right camera.
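
For the single-camera case, the sketch below shows one standard way to obtain dH from the peak of an FFT-based cross-correlation of two consecutive tracking images; it illustrates the technique named above, not necessarily the exact patented algorithm.

```python
import numpy as np

# Estimate the horizontal shift (in pixels) between two grayscale frames
# from the peak of their circular cross-correlation, computed via FFT.
def horizontal_shift(img0, img1):
    corr = np.fft.ifft2(np.fft.fft2(img0) * np.conj(np.fft.fft2(img1))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dx > img0.shape[1] // 2:   # wrap to a signed shift
        dx -= img0.shape[1]
    return dx

a = np.random.rand(64, 64)
b = np.roll(a, 5, axis=1)         # simulate a 5-pixel horizontal head move
print(horizontal_shift(b, a))     # -> 5
```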


To obtain the formulas above, the following geometric construction can be drawn: draw three neighboring signal pixel stripes on pixel-scanning device 400 (any location will do, but for a clearer geometric relation, draw them near the right side of device 400), say . . . L R L . . . ; draw 4 lines from the 4 edge points of the three stripes to the center of the right eye, and 4 lines from the same 4 edge points to the center of the left eye. These 8 lines have many crossing points, but 4 of the crossing points lie closest to pixel-scanning device 400; draw a line connecting these 4 crossing points and extend it to both sides. This line gives the position of the shutter screen, and the separation between two neighboring crossing points gives the width of a shutter pixel stripe.


For both ways, the frame refresh rate of the shutter grids is the same as the frame refresh rate of the displayed picture. During the 2D scan of each displayed frame, the on/off states of the shutter grid stripes are fixed. Here we need to distinguish the source picture refresh rate from the display picture refresh rate: display picture frames are obtained from two neighboring source picture frames by interpolating N frames in between, for a better-looking result, and it is the display picture frames that are actually displayed, rather than the source picture frames.
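
The specification does not state which interpolation is used, so the sketch below stands in with plain linear blending between two neighboring source frames.

```python
import numpy as np

# Yield a source frame followed by N linearly blended in-between frames;
# a real device could instead use motion-compensated interpolation.
def interpolate_frames(src0, src1, N):
    yield src0
    for i in range(1, N + 1):
        t = i / (N + 1)
        yield ((1 - t) * src0 + t * src1).astype(src0.dtype)

f0 = np.zeros((2, 2), np.float32)
f1 = np.ones((2, 2), np.float32)
frames = list(interpolate_frames(f0, f1, N=3))  # 4 display frames before f1
```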


(2) Method II and its Algorithm


In the second method, as shown in FIGS. 6 and 7, the picture is not split into stripes; instead, the shutter lines are scanned together with the corresponding signal pixel lines. If a 1D shutter grid is used, then during the vertical (column) scan of each pixel within each signal pixel line (column), the corresponding stripe window in the shutter is kept on (i.e., all shutter lines in this window are switched on simultaneously, and all other shutter lines are switched off) until the end of the current pixel column, or the start of the next one. Once the scan of the whole left-eye picture is finished, the scan switches to the right-eye picture. If a 2D shutter grid is used, rectangular windows of the shutter are scanned together with the corresponding signal pixels; i.e., when the signal pixel at (i-th row, j-th column) is being scanned, all shutter grids in the corresponding rectangular window are switched on simultaneously, and all other shutter grids are switched off. With a 1D shutter grid, each signal frame (if originally designed for row-by-row scanning) needs a so-called row-to-column conversion by unit 904, i.e., splitting and recombining in time sequence, so as to change scanning row by row into scanning column by column. With a 2D shutter grid, no row-to-column conversion is needed.
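
A simplified sketch of this synchronized column scan for the 1D-shutter case follows; scan_column and open_window are hypothetical hardware hooks.

```python
def open_window(window):        # hypothetical shutter hook: switch on all
    pass                        # grid lines in this window, switch off the rest

def scan_column(j):             # hypothetical display hook: drive pixel column j
    pass

# Scan one eye's picture column by column, keeping only the shutter stripe
# window that corresponds to the current signal pixel column switched on.
def scan_picture(num_columns, window_for_column, eye):
    for j in range(num_columns):
        open_window(window_for_column(j, eye))
        scan_column(j)

# One display cycle: the whole left-eye picture, then the right-eye picture.
scan_picture(1920, lambda j, eye: (j, eye), "L")
scan_picture(1920, lambda j, eye: (j, eye), "R")
```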


There are also two ways to scan. The first way sets the density of the shutter grids to the designed spec, while the picture pixel grid may have the same or a higher density than the designed spec. The second way sets the density of the picture pixel grid to the designed spec, while the shutter grid may have the same or a higher density than the designed spec. Obviously, the second way has lower cost.


In Method I, the stripe width almost equals the stripe pitch (there is almost no gap). In Method II, however, there is no stripe pitch, because only one shutter stripe window is switched on for a given picture pixel or picture stripe.


For the first way (shutter grids at designed spec), the theoretical center locations of the picture pixel or picture stripe are

xL(i) = i*pixelSeparation + [i*pixelSeparation + dE/2]*dL/dV  (left eye)

xR(i) = i*pixelSeparation + [i*pixelSeparation − dE/2]*dL/dV  (right eye)


and dL should satisfy dL > dV*hW/[dE − hW − vR], where hW is the width of the shutter stripe window and vR is the tolerated range of eye motion or viewer head swing. If the width of the stripe window in the shutter is fixed at setup, a larger vR is needed at design time (if the width is adjustable, vR can be very small). If there is more than one pixel line in a picture stripe, the width of the picture stripe is determined by dX = hW + [hW + vR]*dL/dV. For multiple viewers with simultaneous multiple-zone scanning, used to increase brightness and reduce bandwidth, the minimum allowed shutter window pitch should satisfy ph > [hW + (dL/dV)*(dE + vR + hW)]/[1 + (dL/dV)].


To obtain the formulas above and below, the following geometric construction can be drawn: draw one picture stripe on pixel-scanning device 400′; draw a line L1 connecting the right edge of the stripe with the tolerated leftmost position of the left eye, and another line L2 connecting the left edge of the stripe with the tolerated rightmost position of the left eye; L1 and L2 then have one crossing point. Denoting the distance from this crossing point to pixel-scanning device 400′ as dL0, draw a line parallel to device 400′ at distance 2*dL0 from it. For the first way (smaller shutter hole or window), described above, the shutter screen 500′ can be set at a distance less than 2*dL0 from device 400′. For the second way (larger shutter hole or window), described below, the shutter screen 500′ can be set at a distance greater than 2*dL0 from device 400′, and the width of the shutter stripe window hW is determined by the distance between the two crossing points of shutter screen 500′ with the two lines L1 and L2.


For the second way (picture pixels at designed spec), the theoretical center location of the shutter line or shutter stripe (if there is more than one line in the stripe) for the left eye is xL(i) = [i*pixelSeparation − (xN + dE/2)*dL/dV]/(1 + dL/dV), and for the right eye is xR(i) = [i*pixelSeparation − (xN − dE/2)*dL/dV]/(1 + dL/dV), where xN is the nose center location (the location of the middle line between the eyes) relative to the center of the display screen. The width of the shutter stripe window is determined by hW = [vR*dL/dV − p]/(1 + dL/dV), and dL must satisfy dL > dV*(hW + p)/(vR − hW). Again, p is the picture pixel width, or the width of the picture stripe (if there is more than one pixel line in the stripe). For multiple viewers, simultaneous multiple-zone scanning can also be used to increase brightness and reduce bandwidth.
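
A small sketch of the second-way formulas; as the algebra shows, the hW formula and the dL constraint are consistent, meeting exactly at the design point (all values illustrative).

```python
# Second way (picture pixels at designed spec). Consistent length units.
def shutter_window_width(dV, dL, vR, p):
    """Width of the shutter stripe window, hW = [vR*dL/dV - p]/(1 + dL/dV)."""
    return (vR * dL / dV - p) / (1 + dL / dV)

def min_dL(dV, hW, p, vR):
    """Lower bound from the constraint dL > dV*(hW + p)/(vR - hW)."""
    return dV * (hW + p) / (vR - hW)

hW = shutter_window_width(dV=18.0, dL=0.25, vR=1.0, p=0.01)
print(hW, min_dL(18.0, hW, 0.01, 1.0))  # the bound equals dL at the design point
```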


The eye-tracking algorithm for Method II is the same as that described above for Method I.


When the width of a picture pixel is close to the width of a shutter grid, there is a so-called grid mismatch problem: the edge of a picture pixel is offset from the edge of a shutter grid. The grid mismatch problem reduces picture quality. However, if the shutter grid density (vertical or column-line density) is 2 to 10 times (5 may be good enough) the column-line density of the picture, there is no mismatch problem.

Claims
  • 1. An intelligent device (ID) integrating both 3D movie recording and 3D movie playing back, and the corresponding apparatuses, methods, and algorithms, wherein the ID is any one of a cell phone, PDA, iPhone, iPad, pocket PC or tablet, GPS unit, eBook reader, laptop or notebook, desktop computer monitor, TV or iTV, etc.; wherein the 3D playing-back apparatus is either glassless-based or glass-based, and the apparatus either does or does not comprise a camera for eye tracking and the corresponding algorithm;
  • 2. The apparatus and method for 3D movie recording, wherein the apparatus comprises at least two cameras, two of which are installed (as a pair) on the back side of the ID at any locations for 2D and 3D recording, and optionally one or two cameras installed on the front side (facing the viewer) of the ID at any locations, for eye tracking if one camera is installed, or, if two cameras are installed, for distance measurement plus eye tracking and for any other applications, such as 3D live meetings, 3D live chatting, or 3D shopping or selling on TV, iPad, computer monitor, smartphone, etc.; wherein one or both of the cameras in a camera pair for 3D recording are built on the intelligent device, and the two cameras in a recording camera pair are focus-adjustable simultaneously during 3D recording, for either numerical or optical focusing, and are view-angle-adjustable simultaneously before or during 3D recording;
  • 3. The apparatus and corresponding method for glassless 3D movie playing-back, comprising a display apparatus, processing and system control, and algorithms, wherein the display apparatus comprises an image-signal pixel-scanning device with 2D scanning for picture generation and a shutter with 1D or 2D shutter grids for 3D display; wherein the shutter is any kind of optical switch, such as but not limited to an LCD or DMD; wherein the shutter comprises 1-dimensional grids with grid lines along the vertical, used for constructing shutter window(s), or comprises 2-dimensional grids, used for constructing shutter window(s);
  • 4. The 3D movie recording apparatus and method of claim 2, wherein one of the cameras in a camera pair of the 3D recording apparatus is built on an extension that is built on the ID, to increase the view depth beyond human-eye capability, wherein the extension is any one of a mechanical extension link, an optical-fiber link, an RF cable/wire link, or an RF wireless link; wherein, when needed, the mechanical extension is able to slide out and to rotate by any angle; wherein the mechanical extension may or may not have a secondary extension that is able to be pulled out from or pushed into the primary extension;
  • 5. The 3D movie recording apparatus and method of claim 2, wherein the two cameras in a camera pair are also used as 3D projector cameras when needed;
  • 6. The method and algorithm for glassless 3D movie playing-back of claim 3, wherein the left-eye picture and the right-eye picture are split into column stripes, each stripe containing at least one signal pixel line, and the left-eye stripes and right-eye stripes are recombined alternately in time sequence into new frames of type L and type R, which are actually displayed; wherein the 1-dimensional vertical shutter grid lines are divided into stripe shutter window(s), each stripe window comprising at least one (i.e., one or more) grid lines controlled by a synchronization signal from the control system, which simultaneously switches on or off the stripe windows corresponding respectively to the left-eye part or the right-eye part of the recombined frames during the column-by-column pixel scanning of the type L and type R frames;
  • 7. The method and algorithm for glassless 3D movie playing-back of claim 3, wherein the left-eye picture and right-eye picture are not split into column stripes; instead, the shutter windows are directly scanned together with the corresponding row-by-row scanning of the signal pixels of the left-eye picture, and then of the right-eye picture after the left-eye picture scan is done, alternately;
  • 8. The display method and algorithm of claim 7, wherein the shutter window is a stripe window along the vertical (column) direction and is 1-dimensionally scanned, wherein the stripe window comprises at least one (i.e., one or more) grid line, and the width of the stripe window is predetermined by design or dynamically determined from the eye-tracking result, according to the geometric relations;
  • 9. The display method and algorithm of claim 7, wherein the shutter window is a small rectangular window and is 2-dimensionally scanned, wherein the rectangular window comprises the grid pixels in a rectangle at the intersection of at least one column grid and at least one row grid, and the size of the rectangular window is predetermined by design or dynamically determined from the eye-tracking result, according to the geometric relations;
  • 10. The method and algorithm for glassless 3D movie playing-back of claim 3, wherein the algorithms comprise a tracking algorithm and a shutter-grid control algorithm for eye or head motion tolerance.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority from U.S. provisional application No. 61/685,553, filed on Mar. 21, 2012 (postmarked Mar. 17, 2012) and titled "Apparatus and Method for Recording and Playing Back 3D Movies on Intelligent Devices".