A number of different technologies have been used to create three-dimensional (3D) displays. Some of these technologies simulate depth on a planar display screen through visual effects and require onlookers to wear glasses. Volumetric displays, however, create a 3D light field within a volume which can be viewed by an onlooker without any requirement for special glasses. Such volumetric displays may also be described as autostereoscopic because they create a 3D image which is visible to the unaided eye.
In an example of a volumetric display, different images (or different views of an object) are projected synchronously onto a rotating holographic mirror. The projection of the different images builds up a 3D image which can be seen from different viewpoints. Another example of a volumetric display uses a stack of switchable diffusers: different depth images are projected in sequence onto the stack of diffusers in order to simulate the 3D scene data which would be visible at different slices through the volume. Both of these examples rely on the speed of refresh and user-perceived visual continuity to enable a 3D image to be fused together.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known volumetric displays.
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
A volumetric display system which enables user interaction is described. In an embodiment, the system comprises a volumetric display and an optical system. The volumetric display creates a 3D light field of an object to be displayed and the optical system creates a copy of the 3D light field in a position away from the volumetric display, where a user can interact with the image of the displayed object. In an embodiment, the optical system comprises a pair of parabolic mirror portions.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
It will be appreciated that any reference to generation of a 3D display of an object (or a 3D light field of an object) herein is by way of example only and the 3D display may show one or more objects which may be static or animated objects (or a combination thereof).
The rotation of the angled diffuser 102 may be achieved using any suitable mechanism. In the example 100 shown in
The volumetric display 100 shown in
In order to create a fully autostereoscopic volumetric display (i.e. one where the view seen does change with viewing height), the display shown in
The head tracking or eye detection performed by the volumetric display may use any suitable mechanism. In an example, computer vision techniques may be used to detect heads, eyes and/or faces, or infra-red illumination may be used, with the optical sensor 202 detecting bright reflections from viewers' eyes.
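By way of illustration only, the following is a minimal sketch of how viewers' head positions might be detected from frames captured by an optical sensor such as sensor 202, here assuming an OpenCV face detector. The calibration constants which map a pixel position to a viewing height are hypothetical and would be replaced by calibration data for a particular sensor.

```python
# Minimal sketch: estimating viewing heights from detected faces.
# Assumes OpenCV; the calibration constants below are hypothetical.
import cv2

PIXELS_PER_METRE = 500.0   # assumed sensor calibration constant
CAMERA_HEIGHT_M = 0.5      # assumed height of the optical sensor above the display

def detect_viewer_heights(frame):
    """Return an estimated viewing height (in metres) for each detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    heights = []
    for (x, y, w, h) in faces:
        eye_row = y + 0.4 * h                            # eyes sit roughly 40% down the face box
        offset_m = (frame.shape[0] / 2 - eye_row) / PIXELS_PER_METRE
        heights.append(CAMERA_HEIGHT_M + offset_m)
    return heights

# Example usage with a single captured frame:
# capture = cv2.VideoCapture(0)
# ok, frame = capture.read()
# print(detect_viewer_heights(frame))
```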
Having detected, using the optical sensor 202, the positions of viewers' heads relative to the display 200, the projected views of the object may be adjusted based on the known viewing height(s). In the simplified schematic diagram 210 shown in
In addition to, or instead of, using the known user positions (as a result of the head/eye tracking) to determine viewing height and project corrected images of the object, the known user positions may be used to focus the use of the available projection (or display) frame rate on the range of viewing angles where users are located, e.g. by reducing the angular separation of the projected frames, by increasing the refresh rate for any particular viewing angle and/or by only generating images (for projection/display) for the appropriate viewing arc. For example, if viewers are known to only exist over a 180° angular region (which may be referred to as the ‘viewing arc’, e.g. in the plane of rotation of the diffuser 102) and the frame rate is limited to 180 frames per second, these 180 frames may be used to project a different view of the object at 1° increments in viewing angle (rather than 2° over the entire 360°), to double the refresh rate, or to provide some combination of reduced angular separation of projected views and increased refresh rate. In another example, images may only be generated for S views either side of the detected position of a user (i.e. ±S views, where S is an integer and may be set for a particular system).
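The arithmetic in the example above can be illustrated with a short sketch. The following functions are illustrative only and assume a fixed frame budget which is shared between angular resolution and refresh rate over the detected viewing arc.

```python
# Minimal sketch: reallocating a fixed frame budget to a detected viewing arc.
def angular_separation(frame_rate_hz, viewing_arc_deg, refresh_hz=1.0):
    """Angular separation (degrees) between projected views for a given
    viewing arc and desired refresh rate of the whole arc."""
    views_per_refresh = frame_rate_hz / refresh_hz
    return viewing_arc_deg / views_per_refresh

print(angular_separation(180, 360))                  # full 360 deg: 2 deg increments
print(angular_separation(180, 180))                  # 180 deg arc: 1 deg increments
print(angular_separation(180, 180, refresh_hz=2.0))  # or keep 2 deg and double the refresh rate

def views_near_user(user_view_index, s):
    """Indices of the views generated either side of a detected user (i.e. +/-S views)."""
    return list(range(user_view_index - s, user_view_index + s + 1))

print(views_near_user(90, 3))   # e.g. only 7 views centred on the user's position
```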
Although in
As with known volumetric displays, the volumetric displays 100, 200 shown in
In a first example implementation 310, which is shown in
As described above, the example implementation 310 in
Although the volumetric display shown in the example implementation 310 in
The example 310 in
Although the volumetric displays shown in
In another example, a volumetric display may be used which comprises an emissive display (e.g. instead of one which comprises a projector, as in
Although
The volumetric display systems described above and shown in
In a variation of the systems shown in
In the examples shown in
It will be appreciated that whilst the segments are described as being spun around the central axis, in further examples they may be panned through a smaller arc and hence provide a smaller viewing angle (in an analogous manner to the non-spinning portions shown in
Where non-spinning parabolic mirrors, or sections thereof, are used, the parabolic mirrors may be semi-rigid or rigid. However, in the examples shown in
In some implementations of the volumetric display system, the projected views of an object (i.e. the different projected images for each viewing angle) may be distorted. This distortion results from the fact that the 3D light field formed by the volumetric display 301 cannot all be formed at the focal point or focal plane of the upper parabolic mirror 311 (or 711 in the truncated examples), and it may, for example, result in an expansion of the radial size of the object with distance from this focal point. In many applications, any distortion may not be noticeable or may not affect the operation of the system. However, some systems may use one or more different techniques to compensate for any distortion, and two example techniques are described herein. In a first example, the projected images (or the displayed images, where an emissive display 511 is used) of the object may be modified to ensure that the resultant 3D object which is visible to a viewer is not distorted, e.g. by performing a compensatory pre-distortion which radially compresses the top of the object in the images as projected. In another example, the shape of the diffuser 102 or mirror 402 may be selected to compensate for this effect (e.g. such that the diffusing/reflective surface is not planar).
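Purely as an illustration of the first compensation technique, the following sketch pre-distorts the points of an object before the views are generated. It assumes, for simplicity, that the radial expansion grows linearly with height above the focal point; the expansion model and the constant used are assumptions rather than measured properties of any particular optical system.

```python
# Minimal sketch: compensatory pre-distortion of an object prior to rendering.
# Assumes a linear radial-expansion model; the constant below is illustrative.
import numpy as np

def pre_distort(points, expansion_per_metre=0.2):
    """Radially compress 3D points (x, y, z) so that the copied light field
    appears undistorted; z is the height above the focal point in metres."""
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    scale = 1.0 / (1.0 + expansion_per_metre * np.clip(z, 0.0, None))
    return np.column_stack([x * scale, y * scale, z])

# The top of a 10 cm tall object is compressed more than its base:
print(pre_distort([[0.05, 0.0, 0.0], [0.05, 0.0, 0.1]]))
```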
As described above, many volumetric displays do not display a different image if the viewer moves their head vertically but instead the same view is visible irrespective of the viewing height. One solution to this is to include the head/eye tracking capability described above (and shown in
The transformation optics 302 described above are bidirectional and as a result the optics 302 may be used in a sensing system.
Although in
The sensing system 900 shown in
The sensing apparatus 901 may use any wavelength(s) for sensing and the apparatus is not restricted to visible wavelengths. In an example, infra-red (IR) sensing may be used (e.g. detectors 522 may be IR detectors). Furthermore, although the above examples show a sensing apparatus 901 which senses optically via the transformation optics, in other examples alternative sensing techniques may be used (e.g. ultrasonic sensing). Although
There are many different applications for a combined volumetric display and sensing system 1000. In an example, the system 1000 may be used to image an object and then subsequently to display a 3D representation of the imaged object. In this case, multiple views of the object may be captured by, for example, rotating the object or the sensing apparatus. In order that both the top and bottom views of the object are captured, a top-down camera may also be used. The top-down camera may comprise just a camera or, alternatively, a second set of transformation optics (e.g. another pair of parabolic mirrors) together with a camera may be used.
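A capture sequence of this kind might be structured as in the following sketch, which assumes a motorised turntable and cameras exposed through simple hypothetical interfaces; the interface names are illustrative only.

```python
# Minimal sketch: capturing multiple views of a real object for later display.
# 'turntable', 'camera' and 'top_camera' are hypothetical interfaces.
def capture_views(turntable, camera, top_camera=None, angular_step_deg=2):
    """Rotate the object (or the sensing apparatus) and store one view per step."""
    views = {}
    for angle in range(0, 360, angular_step_deg):
        turntable.rotate_to(angle)
        views[angle] = camera.capture()
    if top_camera is not None:
        views["top"] = top_camera.capture()   # top-down view of the object
    return views
```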
In another example, the sensing may be used to enable user input through ‘touching’ the displayed object and this user input may result in a change of the displayed object and/or control of a software application, device etc. It will be appreciated that the user does not actually touch the displayed object but instead can physically interact with the 3D light field which is created by the system, e.g. with the virtual 3D object in the region 315 shown in the FIGS. There are many ways in which the displayed object may be changed in response to such user input.
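As an illustration only, the following sketch interprets a sensed fingertip position as a ‘touch’ on the displayed object, approximating the virtual object in region 315 by a bounding sphere; the sphere approximation and the callback-based interface are assumptions made for this sketch.

```python
# Minimal sketch: detecting a 'touch' on the virtual 3D object in region 315.
# The bounding-sphere model and the callback interface are illustrative.
import math

class TouchableObject:
    def __init__(self, centre, radius, on_touch):
        self.centre = centre          # (x, y, z) position of the virtual object, in metres
        self.radius = radius          # radius of the bounding sphere
        self.on_touch = on_touch      # callback invoked when the light field is 'touched'

    def update(self, fingertip):
        """Call with each sensed fingertip position (x, y, z)."""
        if math.dist(fingertip, self.centre) <= self.radius:
            self.on_touch(fingertip)

# Example: rotate the displayed object whenever it is touched.
# obj = TouchableObject((0.0, 0.0, 0.05), 0.04, on_touch=lambda p: display.rotate(5))
```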
In a further example, the sensing apparatus 901 may be used to sense the environment around the system 1000 and modify the displayed object as a result. For example, the displayed object may be modified to include virtual reflections (e.g. which mimic specular or diffuse reflections) of objects in the environment (e.g. where the displayed object is reflective or partially reflective, such as a coin). In an example, the synthesized reflections may include appropriate angular reflections of the users and/or the environment. The displayed object may also be modified to take into consideration light sources or to provide shadows as a result of sensing the environment (e.g. where the user places their hand between the virtual light field and a light source in the environment). In another example, the system could render the 3D object taking into account the surrounding lighting conditions such that shadows are realistically shown for the environment in which the display is placed. An example of this would be where a user shines a simple torch onto the viewed object: those parts of the displayed 3D object facing away from the torch are made dark (shadowed), whereas the parts of the displayed object nearest to the torch appear brightly illuminated. Incorporating aspects of the sensed environment in this way (e.g. by synthesizing shadows, reflections and other optical features correctly with respect to the surrounding environment and the viewed direction) increases the realism of the displayed object. In an example, a light field camera or multiple 3D cameras may be used to capture images (e.g. multiple images) of the environment and the captured images may be processed to determine how the displayed objects should be modified. Other image processing techniques may also be used (e.g. to remove any blurring in the captured images of the environment).
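The torch example above can be illustrated with a simple Lambertian shading step, sketched below. The per-vertex representation, the sensed light direction input and the ambient term are assumptions made for illustration; a practical renderer would work from the captured environment images.

```python
# Minimal sketch: darkening parts of the displayed object which face away from
# a sensed light source (e.g. a torch), using simple Lambertian shading.
import numpy as np

def shade_vertices(normals, base_colours, light_dir, ambient=0.15):
    """Scale per-vertex colours by how directly each vertex faces the light."""
    light = np.asarray(light_dir, dtype=float)
    light = light / np.linalg.norm(light)
    lambert = np.clip(np.asarray(normals, dtype=float) @ light, 0.0, 1.0)
    intensity = ambient + (1.0 - ambient) * lambert
    return np.asarray(base_colours, dtype=float) * intensity[:, None]

# A vertex facing the torch is bright; one facing away stays at the ambient level:
print(shade_vertices([[0, 0, 1], [0, 0, -1]], [[1, 1, 1], [1, 1, 1]], light_dir=[0, 0, 1]))
```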
In addition to, or instead of, modifying the displayed object (e.g. modifying the projected views of the object), the user inputs which may be detected using the sensing apparatus 901 may result in the user being provided with haptic feedback. In an example, the system 1000 may further comprise air jets or another mechanism (e.g. haptic feedback apparatus 1013) to provide physical feedback to the user when ‘touching’ the displayed object. The air jets may be used to provide a user with a sensation of touch and, in an example, the air jets may be generated within the optical cavity of the volumetric display 301.
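One way of driving such a haptic feedback apparatus is sketched below; the air-jet interface and the mapping from penetration depth to pressure are hypothetical.

```python
# Minimal sketch: pulsing an air-jet haptic apparatus (such as apparatus 1013)
# when a 'touch' is detected. 'air_jets' is a hypothetical interface.
def provide_haptic_feedback(air_jets, fingertip, penetration_depth_m,
                            max_pressure=1.0, full_pressure_depth_m=0.01):
    """Aim the jets at the fingertip and scale pressure with penetration depth."""
    pressure = min(max_pressure,
                   max_pressure * penetration_depth_m / full_pressure_depth_m)
    air_jets.aim_at(fingertip)
    air_jets.set_pressure(pressure)
```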
The interaction between the user and the displayed 3D object need not only be through use of the user's hands and fingers. In some examples, a user may use a physical object (referred to herein as an ‘interaction object’, e.g. a pointer, stylus or ruler) to interact with the displayed 3D object and, in such an example, the haptic feedback (e.g. the air jets) may be used to provide a gentle force on the physical object when it appears (to the user) to come into contact with the displayed object. This use of haptic feedback increases the realism of the display and of the user interaction with the displayed object. Where an interaction object is used, the object may be semi-transparent so that the user's view of the displayed object is not completely obscured by the interaction object and so that any shadowing of the mirror/diffuser is minimized.
Another technique which may be used to increase the realism of any displayed 3D object is to combine the use of computer generated images with use of real objects. In an example of this, a real object may be placed in the bottom of the transformation optics (e.g. in the bottom of the cavity created by the parabolic mirrors). The transformation optics result in the real object being visible to a viewer just above the aperture in the parabolic mirrors (in a similar manner to the web camera shown in
Real objects may also be used in the sensing system 900 shown in
Although the examples described above show the two parabolic mirrors (311, 312 or 711, 712) as being located in contact with each other or in close proximity, this is by way of example only. In other examples, the parabolic mirrors may be spaced apart. In such an example, the mirrors are still arranged such that the base of the lower parabolic mirror lies at the focal plane of the upper parabolic mirror and the base of the upper parabolic mirror lies at the focal plane of the lower parabolic mirror, but use of mirrors of longer focal lengths enables the mirrors to be physically separated.
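By way of illustration only, and assuming two identical mirrors each shaped as a paraboloid of the stated diameter and depth, the constraint above can be expressed numerically: the focal length of such a paraboloid is D²/(16·depth), and placing the base of each mirror in the focal plane of the other means the vertex-to-vertex separation equals that focal length, so longer focal lengths permit greater physical separation.

```python
# Minimal sketch: focal length and separation of two identical parabolic mirrors,
# assuming each mirror is a paraboloid of revolution of the given diameter and depth.
def parabola_focal_length(diameter_m, depth_m):
    return diameter_m ** 2 / (16.0 * depth_m)

def required_separation(diameter_m, depth_m):
    """Vertex-to-vertex separation placing each base at the other's focal plane."""
    return parabola_focal_length(diameter_m, depth_m)

# A shallower mirror (longer focal length) allows the pair to be spaced further apart:
print(required_separation(0.25, 0.05))   # ~0.078 m
print(required_separation(0.25, 0.02))   # ~0.195 m
```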
The parabolic mirrors (or segments thereof) in the examples described above may be fully reflective (e.g. at all wavelengths) or may be reflective at only some wavelengths. In an example, the mirrors may be reflective to visible light but transparent to infra-red light and in such a situation the sensing apparatus 901 may comprise an infra-red camera placed underneath the transformation optics 302 and which images through the transformation optics. It will be appreciated that this is just one example of wavelength selective reflectivity and imaging and other wavelengths may alternatively be used. In a further variation, the parabolic mirrors may comprise switchable mirrors (e.g. mirrors which can be switched between a reflective and a transmissive state).
Computing-based device 1200 comprises one or more processors 1201 which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to generate and/or display different views of an object, sense user interaction, control a software program based on the sensed user interaction etc. Platform software comprising an operating system 1202 or any other suitable platform software may be provided at the computing-based device to enable application software 1203-1205 to be executed on the device. The application software may include display software 1204 which is arranged to display the different views of an object using a volumetric display 301 and which may further be arranged to generate these different views. The application software may also include sensing software 1205 which is arranged to interpret signals provided by a sensing apparatus 901, e.g. to convert them into user inputs for other application software 1203 or for the operating system 1202.
The computer executable instructions may be provided using any computer-readable media, such as memory 1206. The memory may be of any suitable type such as random access memory (RAM), a disk storage device of any type such as a magnetic or optical storage device, a hard disk drive, or a CD, DVD or other disc drive. Flash memory, EPROM or EEPROM may also be used. The memory may also provide an image store 1207 for storing the various views of an object to be displayed and for storing any images captured by the sensing apparatus 901. Alternatively the views of an object may not be stored locally but may be generated dynamically (e.g. by the computing-based device) and/or transmitted to the computing-based device 1200 from another device (e.g. a remote server).
The computing-based device may comprise a display interface 1208 which interfaces to the volumetric display 301 and an input 1209 which receives signals from a sensing apparatus 901. Further inputs may also be provided. The device may also comprise a communication interface 1210 (e.g. for receiving views for display from a remote server) and one or more outputs 1211.
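The division of responsibilities between the display software 1204 and the sensing software 1205 might be organised as in the following sketch; all of the class and method names are illustrative and do not correspond to any particular implementation.

```python
# Minimal sketch: illustrative structure for display software 1204 and sensing
# software 1205; every name here is hypothetical.
class DisplaySoftware:
    def __init__(self, display_interface, image_store):
        self.display_interface = display_interface
        self.image_store = image_store        # stored or dynamically generated views

    def show_object(self, object_id):
        for angle, view in self.image_store.views_for(object_id):
            self.display_interface.project(angle, view)

class SensingSoftware:
    def __init__(self, sensing_input, on_user_input):
        self.sensing_input = sensing_input
        self.on_user_input = on_user_input    # e.g. forwards events to application software 1203

    def poll(self):
        for signal in self.sensing_input.read():
            self.on_user_input(self.interpret(signal))

    def interpret(self, signal):
        # Convert a raw sensor signal into a user-input event (stub).
        return {"type": "touch", "position": signal}
```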
The systems and apparatus described above allow the creation of a true 3D volumetric image which supports direct user interaction. Although the present examples are described and illustrated herein as generating a 3D image, the examples are also suitable for generating a 2D image which supports direct user interaction in a similar manner. In some examples, a 3D image may be generated but the sensing apparatus may operate in 2D, or vice versa.
It will be appreciated that the orientations shown in the FIGS. and described above (e.g. using terminology such as horizontal, vertical, upper, lower etc) are by way of example only and the volumetric display systems described above may be positioned in any orientation which is required. For example, the arrangement shown in
The term ‘computer’ is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
The methods described herein may be performed by software in machine readable form on a tangible storage medium. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or substantially simultaneously.
This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques known to those skilled in the art, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention. Although various embodiments of the invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.