1. The Field of the Invention
This invention relates to systems, methods, and computer program products related to conversion and presentation of three-dimensional video content.
2. Background and Relevant Art
Three-dimensional (3D) display technology involves presenting two-dimensional images in such a manner that the human brain perceives the images as being 3D. The process typically involves presenting “left” image data to the left eye, and “right” image data to the right eye. When received, the brain perceives this data as a 3D image. 3D display technology generally incorporates the use of a filtering or shuttering device, such as stereoscopic glasses, which filter displayed image data to the correct eye. Filtering devices can comprise passive configurations, meaning that the filtering device filters image data passively (e.g., by color code or by polarization), or active configurations, meaning that the filtering device filters image data actively (e.g., by shuttering or “blanking”).
Traditional display devices, such as computer monitors, television sets, and portable display devices, have been either incapable of producing suitable image data for 3D viewing, or have produced an inferior 3D viewing experience using known devices and processes. For instance, viewing 3D content from traditional display devices generally results in blurry images and/or images that have “ghosting” effects, both of which may cause dizziness, headache, discomfort, and even nausea in the viewer. This is true even for display devices that incorporate more recent display technologies, such as Liquid Crystal Display (LCD), Plasma, Light Emitting Diode (LED), Organic Light Emitting Diode (OLED), etc.
Recently, 3D display devices designed specifically for displaying 3D content have become increasingly popular. These 3D display devices are generally used in connection with active filtering devices (e.g., shuttering glasses) to produce 3D image quality not previously available from traditional display devices. These 3D display devices, however, are relatively expensive when compared to traditional display devices.
As a result, consumers who desire to view 3D content face the purchase of expensive 3D display devices, even when they may already have traditional display devices available. Accordingly, there are a number of considerations to be made regarding the display of 3D content.
Implementations of the present invention provide devices, methods, and computer program products configured to enable the viewing of three-dimensional (3D) content on a broad range of display devices. When employing one or more implementations of the present invention, a viewer can view 3D content at display devices not specifically designed for 3D content display, while experiencing a level of quality that can match or even exceed the quality of specialized 3D display devices. Accordingly, implementations of the present invention can eliminate the need to purchase a 3D-specific display device by enabling viewers to view 3D content on traditional display devices in a high-quality manner.
For example, an implementation of a video conversion device can include a video signal input interface device adapted to receive a 3D video signal, and a video signal output interface device adapted to send an output video signal to a particular destination display device. The video conversion device can also include one or more programmable processing units. The processing units can convert the received 3D video signal to the output video signal, which is specifically adapted for display on the destination display device. The processing units can also generate a shuttering signal, which instructs a stereographic shuttering device to shutter a user's view of the output video signal. Along these lines, the video conversion device can also include a shuttering signal transmitter device, which is adapted to transmit the generated shuttering signal to the stereographic shuttering device.
Additionally, an implementation of an accessory device can include an interface device adapted to communicatively interface with a computing system, and a shuttering signal transmission device adapted to transmit a shuttering signal to a stereographic shuttering device. In addition, the accessory device can include one or more computerized storage devices that include executable instructions for converting a 3D video signal to a format adapted for display on a destination display device, generating the shuttering signal, and instructing the shuttering signal transmission device to send the shuttering signal to the stereographic shuttering device. The shuttering signal can include a shuttering instruction which instructs the stereographic shuttering device to shutter an inter-frame transition between first eye 3D content and second eye 3D content from a user's view.
In addition to the foregoing, one or more computer storage devices can include computer-executable instructions that, when executed by one or more processors of a computer system, cause the computer system to implement a method for configuring the computer system to convert three-dimensional (3D) video content for a low frame-rate display device. The method can involve converting an input 3D video signal to an output video signal. The output video signal can include an alternating sequence of one or more first video frames that include a first image for viewing by a first eye and one or more second video frames that include a second image for viewing by a second eye. The method can also involve generating an inter-frame shuttering signal configured to instruct a shuttering device to concurrently shutter both the first eye and the second eye during a display of an inter-frame transition. An inter-frame transition can occur when, after sending the output video signal to a display device, a portion of the "first eye" video frames and a portion of the "second eye" video frames will be displayed concurrently at the display device.
This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages of exemplary implementations of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of such exemplary implementations. The features and advantages of such implementations may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features will become more fully apparent from the following description and appended claims, or may be learned by the practice of such exemplary implementations as set forth hereinafter.
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that the figures are not drawn to scale, and that elements of similar structure or function are generally represented by like reference numerals for illustrative purposes throughout the figures. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Implementations of the present invention provide devices, methods, and computer program products configured to enable the viewing of three-dimensional (3D) content on a broad range of display devices. When employing one or more implementations of the present invention, a viewer can view 3D content at display devices not specifically designed for 3D content display, while experiencing a level of quality that can match or even exceed the quality of specialized 3D display devices. Accordingly, implementations of the present invention can eliminate the need to purchase a 3D-specific display device by enabling viewers to view 3D content on traditional display devices in a high-quality manner.
Specialized 3D display devices attempt to provide an enhanced 3D viewing experience by modifying physical characteristics of the display device, such as by increasing the frame-rate and by decreasing a frame overlap interval. The frame-rate refers to the number of unique video frames the display device can render in a given amount of time (e.g., one second). Frame overlap interval refers to the period of time that elapses when transitioning between two frames. During the frame overlap interval, the display device displays at least a portion of two or more video frames concurrently. Longer frame overlap intervals are perceptible to the human eye, and can lead to a degraded viewing experience. For example, longer frame overlap intervals can cause motion blurring or ghosting. These effects are a particular problem when viewing 3D video content.
One or more implementations of the present invention provide for a video conversion device that converts incoming 3D content to a format adapted to a particular display device, such as a lower frame-rate display device. For example, the conversion device can generate an output video signal formatted in a manner understood by a particular destination display device, which takes into account physical characteristics of the destination device (e.g., frame-rate, frame overlap). The video conversion device can also compensate for longer overlap intervals exhibited by the display device by generating and transmitting a shuttering signal that shutters (or occludes) a user's view of the display device during frame overlap intervals. Thus, the video conversion device can facilitate viewing of 3D content on a broad range of display devices, including display devices that have lower frame-rates (and longer frame overlap intervals), while overcoming undesirable effects, such as motion blurring and ghosting.
One or more implementations also provide for an accessory device (e.g., a Universal Serial Bus (USB) device) that can enable a variety of computing systems to send 3D content to lower frame-rate displays. Thus, the accessory device can extend the functionality of existing devices (e.g., general purpose computing systems, gaming systems, tablet computers, smart phones, set-top boxes, optical disc players, etc.) to enable the devices to send 3D content to attached or integrated display devices (which can have lower frame-rates). Similarly, one or more implementations also provide methods for configuring computing systems to convert/send 3D content to lower frame-rate display devices.
The video conversion device 100 can include a video signal input interface device 102 (video input port) adapted to receive a 3D video signal. As indicated, the video input port 102 can include any number of constituent input ports or interface devices. For instance, the video input port 102 can include one or more digital video input devices, such as High-Definition Multimedia Interface (HDMI) input(s) 102a, DisplayPort input(s) 102b, Digital Visual Interface (DVI) input(s) 102c, etc. While not shown, the video input port 102 can also include any number of analog video inputs, such as composite, component, or coaxial inputs, to name a few. As indicated by the ellipses 102e, the video conversion device 100 can include any appropriate type and number of input ports, such as a LAN/WAN input 102d (e.g., RJ-45 or WIFI).
Similarly,
The video conversion device 100 can include any number of other appropriate external interface devices/ports. For instance, the video conversion device 100 can include one or more user input device(s) 108. As indicated, the user input device(s) 108 can include any combination of switches 108a, buttons 108b, wireless receivers 108c (e.g., Infrared, BLUETOOTH, WIFI), wired receivers 108d (e.g., USB), or others 108e. The video conversion device 100 can also include one or more transmitter devices 110 (e.g., a shuttering signal transmitter device adapted to transmit a shuttering signal). As shown, the transmitter device 110 can comprise an infrared transmitter device 110a, BLUETOOTH transmitter device 110b, WIFI transmitter device 110c, or another transmitter device 110d. Use of the user input device(s) 108 and the transmitter device 110 is discussed in more detail hereinafter.
The video conversion device 100 can also include one or more processing units 106. For convenience in description,
As indicated by the arrows in
To perform the foregoing conversion, the processing units 106 can include a plurality of constituent processing units, can implement a plurality of logical modules or components, or can use a combination of the two. The processing units 106 can, for instance, include a decoder 106a and an encoder 106b. The decoder 106a can receive the 3D video signal (which may comprise any number of various known 3D formats) and decode it into one or more internal buffers for conversion to the output format. For example, the 3D video signal can comprise one or more video frames that encode left-perspective image data and right-perspective image data. The decoder 106a can detect the format used to encode this data, and then decode left-perspective data into one buffer and decode right-perspective data into another buffer. The decoder 106a can decode content in standard (e.g., VESA) or non-standard formats.
Once at least a portion of the 3D video signal is decoded, the encoder 106b can encode image data from the buffer(s) into the output format. In one or more implementations, the output format can comprise a sequence of one or more left-perspective video frame(s) alternating with one or more right-perspective video frames, or vice versa. The processing units 106 can first pass the one or more video frames containing image data for one eye (e.g., one or more left perspective frames) to the video output port 104 via an output video signal. The processing units 106 can subsequently pass one or more video frames containing image data for the other eye (e.g., one or more right perspective frames) to the video output port 104 via the output video signal.
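By way of illustration only, the decode-then-alternate behavior described above can be sketched in Python. All identifiers here are hypothetical and stand in for the decoder/encoder components; the tuples represent already-decoded left/right image data rather than an encoded bitstream:

```python
from collections import deque

def convert_3d_signal(frames, frames_per_eye=1):
    """Sketch of the decoder/encoder pipeline: split each decoded frame's
    left- and right-perspective data into separate buffers, then emit an
    output sequence alternating between the two eyes.

    `frames` is a list of (left_image, right_image) tuples; a real
    decoder would operate on an encoded 3D video signal.
    """
    left_buf, right_buf = deque(), deque()
    for left, right in frames:          # decode step: separate perspectives
        left_buf.append(left)
        right_buf.append(right)

    output = []                         # encode step: alternate eyes
    while left_buf or right_buf:
        for _ in range(frames_per_eye):
            if left_buf:
                output.append(("L", left_buf.popleft()))
        for _ in range(frames_per_eye):
            if right_buf:
                output.append(("R", right_buf.popleft()))
    return output
```

With one frame per eye, two input frames yield the alternating sequence L, R, L, R.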
As part of the conversion process, the processing units 106 can employ a digital-to-analog converter (DAC) and/or an analog-to-digital converter (ADC). The decoder 106a and/or the encoder 106b can include these converters. Alternatively, the converters are one or more separate components (e.g., DAC/ADC 106c). When the received 3D video signal is digital (e.g., from an HDMI connection), the processing units 106 can use the DAC to produce an output video signal that is analog (e.g., component or composite). Such converters can allow for the conversion and display of 3D content on older devices that may not have the capability of receiving digital content. The inverse is also true, and the processing units 106 can convert an analog 3D video signal to an output video signal that is digital using the ADC.
The processing units 106 can also include a shuttering signal generator component 106d, which can generate a shuttering (or sync) signal to assist with 3D viewing. The video conversion device 100 can transmit the generated shuttering signal to one or more shuttering devices (e.g., stereographic shuttering glasses 204,
As mentioned, while encoding (and decoding), the processing units 106 can adapt the output video signal for a particular destination display device. This can include any combination of generating customized types of output video frames (e.g., interlaced or progressive) and/or generating customized sizes of output video frames (e.g., 480, 720, or 1080 vertical lines). This can also include generating output video frames at a target frame-rate (e.g., 60 Hz, 120 Hz). When generating output video frames at a target frame-rate, the processing units 106 can send the frames to the video output port 104 at a rate that would cause the display device to receive a target number of frames per second.
In order to adapt the output format to the destination display device, the processing units 106 can include a detection module or component 106e. The detection module 106e can receive physical characteristic information about the destination display device and provide this information to the other modules or components, such as the decoder 106a and/or the encoder 106b. The detection module 106e can receive the physical characteristic information from a user via the user input device(s) 108, or directly from the display device (e.g., via an HDMI connection). The physical characteristic information can include any appropriate information, such as frame size and frame-rate capabilities of the display device, an inter-frame overlap interval of the display device, etc.
In one or more implementations, receiving physical characteristic information via the user input device(s) 108 can involve receiving specific physical characteristic information about the particular destination display device. The user can, for example, use a wireless remote control to enter or select a make and model of the particular destination display device. The detection module 106e can use this information to look up the physical characteristics of the particular destination display device from a local or remote database. Alternatively, the user can use buttons or switches on the video conversion device 100 to select particular physical characteristics of the particular destination display device (e.g., frame rate).
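The make/model lookup described above can be sketched as a simple table query. The database contents and all names below are purely illustrative:

```python
# Hypothetical local database mapping (make, model) to display characteristics.
DISPLAY_DB = {
    ("Acme", "TV-100"): {"frame_rate_hz": 60, "overlap_ms": 4.0},
    ("Acme", "TV-200"): {"frame_rate_hz": 120, "overlap_ms": 1.5},
}

def lookup_characteristics(make, model, fallback=None):
    """Resolve a user-entered make and model to physical characteristics,
    returning user-selected fallback values when the model is unknown."""
    return DISPLAY_DB.get((make, model), fallback)
```

A remote database would follow the same pattern, with the dictionary lookup replaced by a network query.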
In one or more implementations, receiving physical characteristic information via the user input device(s) 108 can also involve inference and/or learning techniques. In a configuration mode, for instance, the video conversion device 100 can send configuration information for display at the display device in various different formats, while also sending a corresponding shuttering signal to a shuttering device. The user can then provide appropriate feedback about his or her perception of the displayed configuration information, as viewed through the shuttering device, via buttons, wireless communication, etc. Based on the sent configuration information, and the corresponding feedback received, the video conversion device 100 can infer the physical characteristics of the display device.
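The configuration-mode inference described above amounts to trying candidate configurations until the viewer reports a clear image. A minimal sketch, with hypothetical names and the viewer's feedback modeled as a callable:

```python
def infer_characteristics(candidates, user_feedback):
    """Try each candidate display configuration in turn.

    `user_feedback` is a callable that returns True when the viewer,
    watching the displayed configuration pattern through the shuttering
    device, reports a clear image. Returns the first accepted
    configuration, or None if the viewer accepts none of them.
    """
    for config in candidates:
        if user_feedback(config):
            return config
    return None
```

A fuller implementation might refine candidates between rounds rather than scanning a fixed list.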
One will appreciate in light of the disclosure herein that the video conversion device 100 can include any number of additional physical or software-based components or modules (as indicated by the ellipses 106f), or can contain a fewer number of components or modules. Accordingly, the video conversion device 100 can depart from the illustrated form without departing from the scope of this disclosure. As mentioned, other devices may incorporate functionality of the video conversion device. In these instances, the video conversion device may not include some of the illustrated components. For example, if a media device incorporates functionality of the video conversion device 100, the video input device 102 may or may not be included.
The video conversion device 100 can communicate with other devices using any of the hardware components discussed herein above (e.g., the video input port 102, the video output port 104, or the transmitter 110). An appropriate wired (e.g., HDMI, component, composite, coaxial, network) or wireless (BLUETOOTH, Wi-Fi) mechanism can couple the video output port 104 and the display device 202 together. Likewise, an appropriate wired or wireless mechanism can couple the video input port 102 to a media device. Furthermore, an appropriate wireless mechanism (e.g., BLUETOOTH, infrared, etc.) can couple the video conversion device 100 and the blanking device(s) 204 together.
The display device 202 can comprise any one of a broad range of display devices that incorporate a variety of display technologies, both current and future (e.g., Cathode Ray, Plasma, LCD, LED, OLED). The display device 202 can take any of a number of forms, such as a television set, a computer display (e.g., desktop computer monitor, laptop computer display, tablet computer display), a handheld display (e.g., cellular telephone, PDA, handheld gaming device, handheld multimedia device), or any other appropriate form. While the display device 202 can have a configuration designed specifically to display 3D content, the destination display device 202 alternatively can comprise a more traditional display device, such as a lower frame-rate device. One will appreciate in light of the disclosure herein, that the display device 202 can include both digital and analog display devices.
The shuttering device(s) 204 can comprise any shuttering device configured to interoperate with video conversion device 100, and to respond to one or more shuttering instructions received via a shuttering signal. In one or more implementations, the shuttering device(s) 204 comprise stereographic shuttering glasses, with lenses that include one or more liquid crystal layers. The liquid crystal layers can have the property of becoming opaque (or substantially opaque) when voltage is applied (or, alternatively, when voltage is removed). The liquid crystal layers can otherwise have the property of being transparent (or substantially transparent) when voltage is removed (or, alternatively, when voltage is applied). The shuttering device(s) 204 can thus apply or remove voltage from the lenses to block the user's view, as instructed by the shuttering signal.
As mentioned herein above, the video conversion device 100 can generate and send a shuttering signal to one or more shuttering devices 204.
Referring to display states 302 and 306, the video conversion device 100 can provide the illusion that two-dimensional images encoded in the output video signal are 3D. In state 302, for example, the video conversion device 100 can transmit one or more left-perspective video frames in an output video signal 324 to the display device 202, and can also transmit a shuttering instruction 314 (occlude right) to the shuttering device(s) 204. Thus, when the display device 202 displays a left-perspective image 308, a shuttering component 322 can occlude the viewer's right eye view of the display device 202. Similarly, in state 306, the video conversion device 100 can transmit one or more right-perspective video frames in the output signal 324 to the display device 202, and can also transmit a shuttering instruction 318 (occlude left) to the shuttering device(s) 204. Thus, when the display device 202 displays a right-perspective image 312, the shuttering component 320 can occlude the viewer's left eye view of the display device 202.
Alternatively, the video conversion device 100 can reverse the images and shuttering instructions of states 302 and 306. In state 302, for example, the video conversion device 100 can alternatively send a right-perspective image to the display device 202 and can send an “occlude left” instruction to the shuttering device(s) 204. Similarly, in state 306, the video conversion device 100 can send a left-perspective image to the display device 202 and can send an “occlude right” instruction to the shuttering device(s) 204. One will appreciate in light of the disclosure herein that the illustrated sequence of images and instructions is not limiting.
While display states 302 and 306 provide the illusion of 3D content display, one or more implementations introduce a third display state 304, during which the video conversion device 100 occludes an inter-frame overlap 310. Inter-frame overlap 310 occurs after the video conversion device 100 has fully transmitted image data for one eye (e.g., left-perspective video frames), and has begun to transmit image data for the other eye (e.g., right-perspective video frames). During inter-frame overlap, physical limitations of the display device 202 can cause portions of the different frames to "blend," so that portions of both the left and right perspective images are concurrently displayed. The video conversion device 100 can occlude at least a portion of this overlap by transmitting a shuttering instruction 316 (occlude both) to the shuttering device(s) 204, which causes the shuttering device(s) 204 to occlude both eyes concurrently.
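The cycle of display states described above — left image with the right eye occluded, overlap with both eyes occluded, right image with the left eye occluded — can be sketched as a repeating schedule. The names below are illustrative only:

```python
def shuttering_schedule(n_cycles):
    """Emit (display_state, shutter_instruction) pairs for one or more
    left/right display cycles, inserting an occlude-both instruction
    over each inter-frame overlap."""
    schedule = []
    for _ in range(n_cycles):
        schedule.append(("left_frame", "occlude_right"))   # state 302 / 314
        schedule.append(("overlap", "occlude_both"))       # state 304 / 316
        schedule.append(("right_frame", "occlude_left"))   # state 306 / 318
        schedule.append(("overlap", "occlude_both"))       # transition back
    return schedule
```

Each cycle thus yields two image states and two occlude-both transitions.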
Inter-frame shuttering, or the occlusion of both eyes during inter-frame overlap intervals, can enhance the clarity of the perceived 3D image. Inter-frame shuttering can reduce or eliminate the undesirable effects common to 3D content display, such as motion blurring and ghosting. Thus, inter-frame shuttering techniques, when synchronously combined with the creation of an output video signal adapted to a particular display device, can allow for viewing of 3D content on display devices that may have lower frame-rates and/or longer frame overlap intervals.
The video conversion device 100 can cease transmitting the left frame(s) 410 at a time 406, and begin transmitting right-perspective video frame(s) 412. The video conversion device 100 can base the timing of the transition between the left and right frames on a target frame-rate of the output video signal, which is adapted to the destination display device 202. Based on physical characteristic information about the destination display device 202, the video conversion device 100 can determine a display state 304 from time 406 to a time 408. During this period, the display device 202 will display an inter-frame overlap (310,
Next, during display state 306, the destination display device 202 will have transitioned past the inter-frame overlap 310 and will display only the right frame(s) 412. The video conversion device 100 can send an appropriate shuttering instruction 318 (occlude left). Subsequently, the video conversion device 100 can send other left frame(s), other right frame(s), and so on. These frames can include new image data from the received 3D video signal, or can include the same data sent previously (i.e., to increase the frame-rate of the output signal). Along with these frames, the video conversion device 100 can send corresponding shuttering instructions (as shown).
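The timing relationship described above — each eye's lens open for the frame period minus the overlap interval, with both eyes occluded during the overlap — can be expressed with simple arithmetic. This sketch uses hypothetical names and assumes the overlap interval is shorter than the frame period:

```python
def shutter_timings(frame_rate_hz, overlap_ms):
    """Compute, per eye period, how long a lens stays open versus how
    long both eyes are occluded to cover the inter-frame overlap."""
    period_ms = 1000.0 / frame_rate_hz   # time each eye's frame is displayed
    open_ms = period_ms - overlap_ms     # lens-open portion of the period
    return {"open_ms": open_ms, "occlude_both_ms": overlap_ms}
```

For a 100 Hz output with a 2 ms overlap interval, each lens is open 8 ms per period and both are occluded for 2 ms.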
One will also appreciate that, while
One or more implementations also extend to devices adapted for use in configuring a computing system to convert 3D video content for low frame-rate display devices.
The accessory device 500 can include a variety of constituent components. In one or more implementations, the accessory device 500 can include an interface device 502 adapted to communicatively interface with the associated computing system (e.g., a USB interface, an IEEE 1394 interface, an APPLE Dock interface). The accessory device 500 can also include a shuttering signal transmission device 504 (transmitter). Like the transmitter 110 of the video conversion device 100, the transmitter 504 can transmit a shuttering signal to stereographic shuttering devices, and can use any appropriate signal type (e.g., Infrared, BLUETOOTH, WIFI). The associated computing system can process/convert 3D video content, generate a shuttering signal, and send the generated shuttering signal to one or more shuttering devices via the transmitter 504 on the accessory device 500.
The associated computing system can run computer-executable instructions received as part of, or separate from, the accessory device 500. For example, the associated computing system can receive instructions via a storage device provided at the associated computing system (e.g., a CD-ROM, FLASH memory, etc.), via an Internet download, etc. Alternatively, the associated computing system can receive instructions from the accessory device 500. The accessory device 500 can include, for example, one or more computerized storage devices 506 storing computer-executable instructions.
The stored computer-executable instructions can instruct one or more processing units to convert a 3D video signal to a format adapted for display on a particular destination display device. The instructions can, for instance, instruct one or more processors at the associated computing system to perform the conversion. The instructions can also instruct one or more processing units 508 on the accessory device 500 to perform, or to help perform, the conversion. In this manner, the accessory device 500 can offload some or all of the computation needed to perform the conversion from the associated computing system.
The stored computer-executable instructions can also cause one or more processing units to generate a shuttering signal. The shuttering signal can include one or more inter-frame shuttering instructions for shuttering an inter-frame transition between first-eye and second-eye content (as discussed in connection with
As indicated, the accessory device 500 can include one or more processors or processing units 508. Similar to the video conversion device 100, these processing units 508 can comprise any number of processing units, which can each perform a single function, or a variety of functions. The processing units 508 can thus comprise programmable processing units (e.g., FPGAs, microcontrollers), dedicated processing units, or a combination of each. In addition, an appropriate communications channel 510 (e.g., one or more buses) can couple each component of the accessory device 500. Additionally, similar to the video conversion device 100, the processing units 508 can implement a series of processing components, such as decoder(s), encoder(s), ADC, DAC, etc.
The accessory device 500 can also enable 3D content conversion and/or display with other devices as well, such as a gaming device 514 (e.g., XBOX, PLAYSTATION), a DVD/BLU-RAY player 516, or a tablet computer 518. In each environment, the accessory device 500 can include hardware interfaces and/or computer instructions customized to the particular device. It is noted that more specialized devices (e.g., DVD/BLU-RAY players 516 or tablet computers 518) may have limited processing or configuration capabilities. Thus, the inclusion of processing units 508 on the accessory device 500 can enable these devices to process/convert 3D content.
Implementations of the present invention can also be described in terms of flowcharts comprising one or more acts in a method for accomplishing a particular result.
As illustrated, a method can comprise an act 602 of converting an input 3D video signal. Act 602 can include converting an input 3D video signal to an output video signal which includes an alternating sequence of one or more first video frames that include a first image for viewing by a first eye and one or more second video frames that include a second image for viewing by a second eye. For example, the video conversion device 100 can receive a 3D video signal and convert the received 3D content to a format adapted for a particular destination display device. Similarly, the accessory device 500 can configure a general or special purpose computing system to convert 3D video content. The accessory device can, for example, include computer-executable instructions which cause processors at the computing system, or at the accessory device 500, to convert 3D content to an output signal adapted for a particular display device. As disclosed, adapting the output signal can include determining an optimal frame rate for the display device, which can comprise a low frame-rate display device.
The illustrated method can also comprise an act 604 of generating an inter-frame shuttering signal. Act 604 can include generating an inter-frame shuttering signal configured to instruct a shuttering device to concurrently shutter both the first eye and the second eye during a display of an inter-frame transition. The inter-frame transition can comprise a period during which at least a portion of the one or more first video frames and at least a portion of the one or more second video frames are displayed concurrently. For example, the video conversion device 100 or the accessory device 500 can generate, or cause an associated computing system to generate, a shuttering signal that includes a plurality of shuttering instructions. As illustrated in
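Acts 602 and 604 can be summarized in a single pipeline sketch: convert the input into an alternating first-eye/second-eye sequence while emitting a shuttering signal whose occlude-both instructions bracket each inter-frame transition. All names are illustrative:

```python
def convert_with_shuttering(pairs):
    """Sketch of acts 602 and 604: from decoded (left, right) frame
    pairs, build the alternating output video sequence and a matching
    shuttering signal with occlude-both instructions covering each
    inter-frame transition."""
    video, shutter = [], []
    for left, right in pairs:
        video += [left, right]
        shutter += ["occlude_right", "occlude_both",
                    "occlude_left", "occlude_both"]
    return video, shutter
```

Each frame pair thus produces two output frames and two occlude-both transitions in the shuttering signal.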
Implementations of the present invention can comprise special-purpose or general-purpose computing systems. Computing systems may, for example, comprise handheld devices, appliances, laptop computers, desktop computers, mainframes, distributed computing systems, or even devices that have not conventionally been considered computing systems, such as DVD players, BLU-RAY players, gaming systems, and video converters. In this description and in the claims, the term “computing system” is defined broadly as including any device or system (or combination thereof) that includes at least one physical and tangible processor, and a physical and tangible memory capable of having thereon computer-executable instructions, which the processor may execute.
The memory may take any form and may depend on the nature and form of the computing system. A computing system may be distributed over a network environment and may include multiple constituent computing systems. In its most basic configuration, a computing system typically includes at least one processing unit and memory. The memory may be physical system memory, which may be volatile, non-volatile, or some combination of the two. The term “memory” may also be used herein to refer to non-volatile mass storage such as physical storage media. If the computing system is distributed, the processing, memory and/or storage capability may be distributed as well. As used herein, the term “module” or “component” can refer to software objects or routines that execute on the computing system. The different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads).
Implementations of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media.
Computer storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired and wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
The present application is a U.S. National Stage Application corresponding to PCT Patent Application No. PCT/US2011/031115, filed Apr. 4, 2011, which claims priority to U.S. Provisional Application No. 61/416,708, filed Nov. 23, 2010, entitled “3D VIDEO CONVERTER.” The present application is also a continuation-in-part of: PCT Patent Application No. PCT/US2011/025262, filed Feb. 17, 2011, entitled “BLANKING INTER-FRAME TRANSITIONS OF A 3D SIGNAL;” PCT Patent Application No. PCT/US2011/027175, filed Mar. 4, 2011, entitled “FORMATTING 3D CONTENT FOR LOW FRAME-RATE DISPLAYS;” PCT Patent Application No. PCT/US2011/027933, filed Mar. 10, 2011, entitled “DISPLAYING 3D CONTENT ON LOW FRAME-RATE DISPLAYS;” PCT Patent Application No. PCT/US2011/027981, filed Mar. 10, 2011, entitled “SHUTTERING THE DISPLAY OF INTER-FRAME TRANSITIONS;” and PCT Patent Application No. PCT/US2011/032549, filed Apr. 14, 2011, entitled “ADAPTIVE 3-D SHUTTERING DEVICES.” The entire content of each of the foregoing applications is incorporated by reference herein.
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/US11/31115 | 4/4/2011 | WO | 00 | 12/20/2011
Number | Date | Country
---|---|---
61416708 | Nov 2010 | US
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/US11/25262 | Feb 2011 | US
Child | 13379613 | | US
Parent | PCT/US11/27175 | Mar 2011 | US
Child | PCT/US11/25262 | | US
Parent | PCT/US11/27933 | Mar 2011 | US
Child | PCT/US11/27175 | | US
Parent | PCT/US11/27981 | Mar 2011 | US
Child | PCT/US11/27933 | | US
Parent | PCT/US11/32549 | Apr 2011 | US
Child | PCT/US11/27981 | | US