The present embodiments relate to calibration of a stereoscopic effect, and in particular, to methods, apparatus and systems for determining user preferences with regard to the stereoscopic effect.
Stereopsis comprises the process by which the human brain interprets an object's depth based upon the relative displacement of the object as seen from the left and right eyes. The stereoscopic effect may be artificially induced by taking first and second images of a scene from first and second laterally offset viewing positions and presenting the images separately to each of the left and right eyes. By capturing a succession of stereoscopic image pairs over time and presenting the pairs successively to the eyes, a “three-dimensional movie” may be formed.
As the stereoscopic effect relies upon the user to integrate the left and right images into a single picture, user-specific qualities may affect the experience. In particular, the disparity between objects in the left and right images must be correlated with a particular depth by the user's brain. While stereoscopic projectors and displays are regularly calibrated prior to use, an efficient and accurate means for rapidly determining a specific user's preferences for a given stereoscopic depiction, which may depend on several user-specific factors, remains lacking.
Certain embodiments contemplate a method, implemented on an electronic device, for determining a parameter for a stereoscopic effect. The method may comprise displaying to a user a plurality of images comprising a stereoscopic effect of an object, the object depicted at a plurality of three-dimensional locations by the plurality of images; receiving a preference indication from the user of a preferred three-dimensional location; and determining a parameter for stereoscopic depictions of additional images based upon the preference indication.
In certain embodiments, at least two of the plurality of locations may be displaced relative to one another in the x, y, and z directions. In some embodiments, the plurality of locations comprises a location having a positive depth position. In some embodiments, the plurality of images further comprises a stereoscopic effect of a second object, the second object depicted at a second plurality of locations by the plurality of images, the second plurality of locations comprising a location having a negative depth position. In some embodiments, the plurality of images depicts movement of the object in the plane of a display. In some embodiments, the plurality of images may be dynamically generated based on at least a screen geometry of a display. In some embodiments, the plurality of images may be dynamically generated based on at least the user's distance from a display. In some embodiments, the method further comprises storing the parameter to a memory. In some embodiments, the method further comprises determining a maximum range for depth of the object based upon the parameter. In some embodiments, the electronic device comprises a mobile phone. In some embodiments, the parameter is the preference indication.
Certain embodiments contemplate a computer-readable medium comprising instructions that when executed cause a processor to perform various steps. The steps may include: displaying to a user a plurality of images comprising a stereoscopic effect of an object, the object depicted at a plurality of locations by the plurality of images; receiving a preference indication from the user of a preferred three-dimensional location; and determining a parameter for stereoscopic depictions of additional images based upon the preference indication.
In some embodiments, at least two of the plurality of locations are displaced relative to one another in the x, y, and z directions. In some embodiments, the plurality of locations comprises a location having a positive depth position. In some embodiments, the plurality of images further comprises a stereoscopic effect of a second object, the second object depicted at a second plurality of locations by the plurality of images, the second plurality of locations comprising a location having a negative depth position. In some embodiments, the plurality of images depicts movement of the object in the plane of the display.
Certain embodiments contemplate an electronic stereoscopic vision system, comprising: a display; a first module configured to display a plurality of images comprising a stereoscopic effect of an object, the object depicted at a plurality of locations by the plurality of images; an input configured to receive a preference indication from the user of a preferred three-dimensional location; and a memory configured to store a parameter associated with the preference indication, wherein the parameter is used to display additional images according to the preference indication of the user.
In certain embodiments, at least two of the plurality of locations are displaced relative to one another in the x, y, and z directions. In some embodiments, the plurality of locations comprises a location having a positive depth position. In some embodiments, the plurality of images further comprises a stereoscopic effect of a second object, the second object depicted at a second plurality of locations by the plurality of images, the second plurality of locations comprising a location having a negative depth position. In some embodiments, the plurality of images depicts movement of the object in the plane of the display. In some embodiments, the plurality of images is dynamically generated based on at least a screen geometry of the display. In some embodiments, the plurality of images is dynamically generated based on at least the user's distance from the display. In some embodiments, the electronic device comprises a mobile phone. In some embodiments, the parameter is the preference indication.
Certain embodiments contemplate a stereoscopic vision system in an electronic device, the system comprising: means for displaying to a user a plurality of images comprising a stereoscopic effect of an object, the object depicted at a plurality of locations by the plurality of images; means for receiving a preference indication from the user of a preferred three-dimensional location; and means for determining a parameter for stereoscopic depictions of additional images based upon the preference indication.
In some embodiments, the displaying means comprises a display, the depicting means comprises a plurality of images, the means for receiving a preference indication comprises an input, and the means for determining a stereoscopic parameter comprises a software module configured to store a preferred range. In some embodiments, at least two of the plurality of locations are displaced relative to one another in the x, y, and z directions.
The disclosed aspects will hereinafter be described in conjunction with the appended drawings, provided to illustrate and not to limit the disclosed aspects, wherein like designations denote like elements.
Embodiments relate to systems for calibrating stereoscopic display systems so that stereoscopic video data is presented in a manner the user perceives as comfortable. Because different users may have differing tolerances for stereoscopic video, the systems and methods described herein allow a user to modify certain stereoscopic display parameters to make viewing the video comfortable. In one embodiment, a user can modify stereoscopic video parameters in real time while viewing a stereoscopic video. These modifications are then used to display the stereoscopic video to the user in a more comfortable format.
Present embodiments contemplate systems, apparatus, and methods to determine a user preference with regard to the display of stereoscopic images. Particularly, in one embodiment a stereoscopic video sequence is presented to a user. The system takes calibration input from the user, and providing this input need not require the user to possess extensive knowledge of 3D technology. For example, a user may select “less” or “more” of a three-dimensional effect in the video sequence being viewed. The system receives that input and reduces or increases the three-dimensional effect presented within the video sequence by altering the angular or lateral disparity of the left-eye and right-eye images presented to the user.
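By way of a non-limiting, hypothetical illustration, the “less”/“more” interaction described above may be mapped to a single disparity scale factor applied to the left-eye and right-eye images. The class, method names, step size, and limits in the sketch below are illustrative assumptions rather than features of any particular embodiment.

```python
# Hypothetical sketch: map a "less"/"more" selection to a disparity scale
# factor that a renderer applies to the authored left/right image offsets.

class StereoCalibrator:
    def __init__(self, disparity_scale=1.0, step=0.1,
                 min_scale=0.0, max_scale=2.0):
        self.disparity_scale = disparity_scale  # 1.0 = content as authored
        self.step = step
        self.min_scale = min_scale
        self.max_scale = max_scale

    def on_user_choice(self, choice):
        """Reduce or increase the stereoscopic effect in real time."""
        if choice == "less":
            self.disparity_scale -= self.step
        elif choice == "more":
            self.disparity_scale += self.step
        self.disparity_scale = max(self.min_scale,
                                   min(self.max_scale, self.disparity_scale))
        return self.disparity_scale

    def scaled_disparity(self, authored_disparity_px):
        """Disparity (in pixels) actually used when rendering a frame."""
        return authored_disparity_px * self.disparity_scale


calibrator = StereoCalibrator()
calibrator.on_user_choice("less")          # user finds the effect too strong
print(calibrator.scaled_disparity(20.0))   # 18.0 pixels instead of 20.0
```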
One skilled in the art will recognize that these embodiments may be implemented in hardware, software, firmware, or any combination thereof. The stereoscopic display may be found on a wide range of electronic devices, including mobile wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, televisions, digital cameras, digital recording devices, and the like.
An input 103, either attached to display device 105 or operating remotely, may be used to provide user input to display device 105. In some embodiments, the input 103 may comprise input controls attached to, or integrated into the housing of, display device 105. In other embodiments, the input 103 may comprise a wireless remote control, such as those used with televisions. Input 103 may be configured to receive key or button presses or motion gestures from the user, or may comprise any other means for receiving a preference indication. In some embodiments, buttons 103a on input 103 designated for other purposes, such as selecting a channel, adjusting the volume, or entering a command, may be repurposed to receive input regarding the calibration procedure. In some embodiments, buttons 103b specifically designed for receiving calibration inputs may be provided. For gesture-sensitive inputs 103c, the system may recognize certain gestures on a touchscreen (via a finger, stylus, etc.) during the calibration procedure as being related to calibration. The inputs may be used to control the calibration procedure, as well as to indicate preferred parameters. For example, in some embodiments depressing the channel selection button or making a finger motion may alter the motion of the plurality of objects 104 during calibration. Depressing an “enter” key or a “Pop-In” or “Pop-Out” selection key may be used to identify a preferred maximum range parameter. Input 103 may also comprise “observational inputs,” such as a camera or other device that monitors the user's behavior, such as characteristics of the user's eye, in response to particular calibration stimuli.
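As a non-limiting, hypothetical illustration, the repurposed keys 103a, dedicated calibration keys 103b, and gestures 103c described above could be translated into a small set of calibration actions through a lookup table such as the one below; the event names and action strings are assumptions made for illustration only.

```python
# Hypothetical mapping of repurposed remote-control keys (103a), dedicated
# calibration keys (103b), and touch gestures (103c) to calibration actions.
CALIBRATION_INPUT_MAP = {
    "CHANNEL_UP":   "push_objects_deeper",   # increase pop-in during calibration
    "CHANNEL_DOWN": "pull_objects_closer",   # increase pop-out during calibration
    "ENTER":        "mark_preferred_range",  # record current depth as preferred
    "POP_IN_KEY":   "mark_preferred_range",
    "POP_OUT_KEY":  "mark_preferred_range",
    "SWIPE_UP":     "push_objects_deeper",
    "SWIPE_DOWN":   "pull_objects_closer",
    "TAP":          "mark_preferred_range",
}

def translate_input(event_name):
    """Return the calibration action for a raw key press or gesture,
    or None if the event is unrelated to the calibration procedure."""
    return CALIBRATION_INPUT_MAP.get(event_name)
```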
Database storage 106, though depicted outside device 105, may comprise means for storing data, such as storage internal or external to device 105, wherein the user's preferences may be stored. In some embodiments, database 106 may comprise a portion of the internal memory of device 105. In some embodiments, database 106 may comprise a central server system external to device 105. The server system may be accessible to multiple display devices so that the preferences determined on one device are available to another device.
As mentioned, when depicting a stereoscopic scene, device 105 may present objects 104 to the user as moving in any of the x, y, and z directions. Movement in the z direction may be accomplished via the stereoscopic effect.
Conversely, as shown in
Different users' brains may integrate object disparity between the images of
Unfortunately, in some circumstances cataloguing a user's lateral and angular disparity preferences in isolation from other factors may not suffice to avoid user discomfort. Lateral and angular disparities may interrelate with one another, and with other factors, holistically when a user perceives the stereoscopic effect. For example, with reference to
Certain of the present embodiments contemplate displaying an interactive stereoscopic video sequence to the user and receiving input from the user to determine the user's preferred ranges of the stereoscopic effect. The interactive video sequence may be especially configured to determine the user's lateral and angular disparity preferences at a given distance from display 102. In some embodiments, the user may specify their distance from display 102 in advance. In other embodiments, the distance may be determined using a range-finder or similar sensor on device 105. In certain embodiments, the video sequence may comprise moving objects that appear before and behind the plane of the display (i.e., in positive and negative positions in the z-direction). As the user perceives the objects' motion, the user may indicate positive and negative depths at which they feel comfort or discomfort. These selections may be translated into the appropriate 3D depth configuration parameter to be sent to the 3D processing algorithm. In some embodiments, a single image depicting a plurality of depths may suffice for determining the user's preferences. In some embodiments, the video may be dynamically generated based upon such factors as the user's previous preferences, data derived from other sensors on device 105 (such as user location data), and user preferences from other stereoscopic devices (such as devices which have been previously calibrated but possess a different screen geometry). In some embodiments, the video sequence may be generated based on a screen geometry specified by the user. In some embodiments, the screen geometry may be automatically determined.
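To make the geometry concrete, the sketch below shows one standard way a dynamically generated calibration sequence could convert a target object depth into an on-screen pixel disparity from the viewer's distance and the display's physical geometry. The function, its parameter names, the sign convention, and the default interocular distance are assumptions for illustration and are not taken from any particular embodiment.

```python
# Minimal sketch, assuming a standard parallel-viewing geometry: convert a
# target object distance into an on-screen pixel disparity, given the
# viewer's distance from the display and the display's physical geometry.

def disparity_pixels(object_distance_m, viewing_distance_m,
                     screen_width_m, screen_width_px,
                     interocular_m=0.063):
    """Screen parallax (in pixels) for an object at object_distance_m from
    the viewer.  Positive values place the object behind the screen plane
    (pop-in); negative values place it in front of the screen (pop-out)."""
    parallax_m = interocular_m * (object_distance_m - viewing_distance_m) \
                 / object_distance_m
    return parallax_m * screen_width_px / screen_width_m

# Example: a 1.1 m-wide display viewed from 2 m, rendered at 1920 px across.
behind = disparity_pixels(3.0, 2.0, 1.1, 1920)    # > 0: pop-in
in_front = disparity_pixels(1.5, 2.0, 1.1, 1920)  # < 0: pop-out
```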
With reference to
One skilled in the art will recognize that once the maximum pop-in and pop-out ranges are determined, numerous corresponding values may be stored in lieu of the actual ranges. Thus, in some embodiments, the stored preferences or parameters may comprise the values of the preferred pop-in and pop-out ranges (i.e., the maximum pop-in value and the maximum pop-out value). However, in other embodiments the corresponding disparity ranges for objects appearing in each image may instead be stored. In some embodiments, the position and orientation of the virtual cameras used to generate the images that correspond to the user's preferred ranges may be stored. In this case, the stored preference may be used when dynamically generating a subsequent scene. As mentioned, database 106 may in some embodiments provide other display devices with access to the user's preferences so that it is unnecessary for the user to recalibrate each system upon use. Software modules configured to store the user's preferred ranges, table lookups that associate a preferred range with one or more variables affecting the display of stereoscopic images, software making reference to such a lookup table, and other means for determining a parameter based upon a preference indication will be readily recognized by one skilled in the art. Thus, in some instances the determining means may simply identify the user-indicated range as a parameter to be stored. Alternatively, the determining means may identify a value for a display variable, such as the disparity, corresponding to the range. The maximum disparity value, rather than the user-defined range, may then be stored.
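As a loose, non-limiting illustration of such a determining means, the following sketch looks up a maximum pixel disparity corresponding to a user-indicated pop-in or pop-out range and persists that value instead of the range itself. The table entries, storage format, and file name are hypothetical.

```python
# Loose sketch of a "determining means": look up the maximum pixel disparity
# that corresponds to a user-indicated pop-in or pop-out range, and persist
# that value instead of the raw range.
import json

# Hypothetical table: (direction, preferred range in cm) -> max pixel
# disparity for a particular display.  A real system might interpolate.
RANGE_TO_MAX_DISPARITY_PX = {
    ("pop_out", 5):  12,
    ("pop_out", 10): 24,
    ("pop_in", 10):  20,
    ("pop_in", 20):  40,
}

def store_preference(user_id, direction, range_cm, path="stereo_prefs.json"):
    """Convert a preferred depth range into a display parameter and store it."""
    max_disparity = RANGE_TO_MAX_DISPARITY_PX.get((direction, range_cm))
    record = {"user": user_id, "direction": direction,
              "range_cm": range_cm, "max_disparity_px": max_disparity}
    with open(path, "w") as f:
        json.dump(record, f)
    return record
```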
Certain of the embodiments, such as the embodiments of
One will recognize that the order in which the negative and positive depth preferences are determined may be arbitrary, and in some embodiments the two determinations may occur simultaneously. The video sequence may, for example, simultaneously display pairs of objects at locations in the x, y, and z directions known to comprise extrema for user preferences. By selecting a pair, the user may indicate both a positive and a negative depth preference with a single selection. In some instances, it may be necessary to display only a single stereoscopic image.
Once the system has determined a user's preferences, the preferences may be stored for use during subsequent displays. Alternatively, some embodiments contemplate converting the preference to one or more display parameters for storage instead. For example, a user preference may be used to determine a maximum scaling factor for positive and negative depth during display. Storing the scaling factors or another representation may be more efficient than storing depth ranges. Additional data, such as data regarding the user's location relative to the display 102, may also be converted into appropriate parameters prior to storage in database 106.
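As a hedged, hypothetical sketch of this conversion, the snippet below reduces the user's preferred pop-in and pop-out depths to scaling factors relative to an assumed authored depth budget; the budget values and all names are illustrative assumptions only.

```python
# Hedged sketch: reduce preferred pop-in / pop-out depths to scale factors
# relative to an assumed authored depth budget.

AUTHORED_MAX_POP_IN_CM = 30.0   # depth budget behind the screen (assumed)
AUTHORED_MAX_POP_OUT_CM = 15.0  # depth budget in front of the screen (assumed)

def preference_to_scale_factors(preferred_pop_in_cm, preferred_pop_out_cm):
    """Return (pop_in_scale, pop_out_scale) in [0, 1], suitable for storage
    in database 106 and for scaling the depth of subsequent content."""
    pop_in_scale = min(1.0, preferred_pop_in_cm / AUTHORED_MAX_POP_IN_CM)
    pop_out_scale = min(1.0, preferred_pop_out_cm / AUTHORED_MAX_POP_OUT_CM)
    return pop_in_scale, pop_out_scale
```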
The various illustrative logical blocks, modules, and circuits described in connection with the implementations disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or process described in connection with the implementations disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of non-transitory storage medium known in the art. An exemplary computer-readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the computer-readable storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal, camera, or other device. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal, camera, or other device.
Headings are included herein for reference and to aid in locating various sections. These headings are not intended to limit the scope of the concepts described with respect thereto. Such concepts may have applicability throughout the entire specification.
The previous description of the disclosed implementations is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these implementations will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the implementations shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
This application claims the benefit under 35 U.S.C. Section 119(e) of co-pending and commonly-assigned U.S. Provisional Patent Application Ser. No. 61/489,224, filed on May 23, 2011, by Kalin Atanassov, Sergiu Goma, Joseph Cheung, and Vikas Ramachandra, entitled “INTERACTIVE USER INTERFACE FOR STEREOSCOPIC EFFECT ADJUSTMENT,” which application is incorporated by reference herein.