Traditionally, user interaction with a computer has been by way of a keyboard and mouse. Tablet PCs have been developed which enable user input using a stylus, and touch-sensitive screens have been produced to enable a user to interact more directly by touching the screen (e.g. to press a soft button). However, the use of a stylus or touch screen has generally been limited to detection of a single touch point at any one time.
Recently, surface computers have been developed which enable a user to interact directly with digital content displayed on the computer using multiple fingers. Such multi-touch input on the display of a computer provides a user with an intuitive user interface, but detection of the multiple touch events is difficult. One approach to multi-touch detection is to use a camera either above or below the display surface and to use computer vision algorithms to process the captured images. Use of a camera above the display surface enables imaging of hands and other objects which are on the surface, but it is difficult to distinguish between an object which is close to the surface and an object which is actually in contact with it. Additionally, occlusion can be a problem in such ‘top-down’ configurations. In the alternative ‘bottom-up’ configuration, the camera is located behind the display surface, along with a projector which projects the images for display onto the display surface, which comprises a diffuse surface material. Such ‘bottom-up’ systems can more easily detect touch events, but imaging of arbitrary objects is difficult.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known surface computing devices.
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
An interactive surface computer with a switchable diffuser layer is described. The switchable layer has two states: a transparent state and a diffusing state. When it is in its diffusing state, a digital image is displayed and when the layer is in its transparent state, an image can be captured through the layer. In an embodiment, a projector is used to project the digital image onto the layer in its diffusing state and optical sensors are used for touch detection.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
The term ‘surface computing device’ is used herein to refer to a computing device which comprises a surface which is used both to display a graphical user interface and to detect input to the computing device. The surface may be planar or may be non-planar (e.g. curved or spherical) and may be rigid or flexible. The input to the computing device may, for example, be through a user touching the surface or through use of an object (e.g. object detection or stylus input). Any touch detection or object detection technique used may enable detection of single contact points or may enable multi-touch input.
The following description refers to a ‘diffuse state’ and a ‘transparent state’ and these refer to the surface being substantially diffusing and substantially transparent, with the diffusivity of the surface being substantially higher in the diffuse state than in the transparent state. It will be appreciated that in the transparent state the surface may not be totally transparent and in the diffuse state the surface may not be totally diffuse. Furthermore, as described above, in some examples, only an area of the surface may be switched (or may be switchable).
An example of the operation of the surface computing device can be described with reference to the flow diagram and timing diagrams 21-23 shown in
The surface computing device as described herein has two modes: a ‘projection mode’ when the surface is in its diffuse state and an ‘image capture mode’ when the surface is in its transparent state. If the surface 101 is switched between states at a rate which exceeds the threshold for flicker perception, anyone viewing the surface computing device will see a stable digital image projected on the surface.
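The alternation between the two modes may be sketched as a simple control loop. In the sketch below, `set_surface_state`, `project_frame` and `capture_frame` are hypothetical driver hooks (not part of this description), and the rate is purely an illustrative value above the flicker-perception threshold.

```python
# Sketch of the projection / image-capture cycle: the surface is made
# diffuse while a frame is projected, then transparent while an image is
# captured through it. The hooks and the rate are illustrative assumptions.

SWITCH_RATE_HZ = 120              # example rate above the flicker threshold
HALF_PERIOD_S = 1.0 / SWITCH_RATE_HZ

def run_cycle(n_cycles, set_surface_state, project_frame, capture_frame):
    """Alternate between projection mode (diffuse) and capture mode (transparent)."""
    captured = []
    for _ in range(n_cycles):
        set_surface_state("diffuse")      # projection mode
        project_frame()
        set_surface_state("transparent")  # image capture mode
        captured.append(capture_frame())
    return captured
```

A real implementation would also pace each half-cycle to `HALF_PERIOD_S`; timing is omitted here for clarity.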
A surface computing device with a switchable diffuser layer (e.g. surface 101), such as that shown in
The surface 101 may comprise a sheet of Polymer Stabilised Cholesteric Textured (PSCT) liquid crystal and such a sheet may be electrically switched between diffuse and transparent states by applying a voltage. PSCT is capable of being switched at rates which exceed the threshold for flicker perception. In an example, the surface may be switched at around 120 Hz. In another example, the surface 101 may comprise a sheet of Polymer Dispersed Liquid Crystal (PDLC); however, the switching speeds which can be achieved using PDLC are generally lower than with PSCT. Other examples of surfaces which can be switched between a diffuse and a transparent state include a gas filled cavity which can be selectively filled with a diffusing or transparent gas, and a mechanical device which can switch dispersive elements into and out of the plane of the surface (e.g. in a manner analogous to a Venetian blind). In all these examples, the surface can be electrically switched between a diffuse and a transparent state. Dependent upon the technology used to provide the surface, the surface 101 may have only two states or may have many more, e.g. where the diffusivity can be controlled to provide a number of intermediate states.
In some examples, the whole of the surface 101 may be switched between the substantially transparent and the substantially diffuse states. In other examples, only a portion of the screen may be switched between states. Depending on the granularity of control of the area which is switched, in some examples, a transparent window may be opened up in the surface (e.g. behind an object placed on the surface) whilst the remainder of the surface stays in its substantially diffuse state. Switching of portions of the surface may be useful where the switching speed of the surface is below the flicker threshold to enable an image or graphical user interface to be displayed on a portion of the surface whilst imaging occurs through a different portion of the surface.
In other examples, the surface may not be switched between a diffuse and a transparent state but may have a diffuse and a transparent mode of operation dependent on the nature of the light incident upon the surface. For example, the surface may act as a diffuser for one orientation of polarized light and may be transparent to another polarization. In another example, the optical properties of the surface, and hence the mode of operation, may be dependent on the wavelength of the incident light (e.g. diffuse for visible light, transparent to IR) or the angle of incidence of the incident light. Examples are described below with reference to
The display means in the surface computing device shown in
The projector 102 may project an image irrespective of whether the surface is diffuse or transparent or, alternatively, the operation of the projector may be synchronized with the switching of the surface such that an image is only projected when the surface is in one of its states (e.g. when it is in its diffuse state). Where the projector is capable of being switched at the same speed as the surface, the projector may be switched directly in synchronization with the surface. In other examples, however, a switchable shutter (or mirror or filter) 104 may be placed in front of the projector and the shutter switched in synchronization with the surface. An example of a switchable shutter is a ferroelectric LCD shutter.
Any light source within the surface computing device, such as projector 102, any other display means or another light source, may be used for one or more of the following, when the surface is transparent:
The image capture device 103 may comprise a still or video camera and the images captured may be used for detection of objects in proximity to the surface computing device, for touch detection and/or for detection of objects at a distance from the surface computing device. The image capture device 103 may further comprise a filter 105 which may be wavelength and/or polarization selective. Whilst images are described above as being captured in ‘image capture mode’ (block 204) when the surface 101 is in its transparent state, images may also be captured, by this or another image capture device, when the surface is in its diffuse state (e.g. in parallel to block 202). The surface computing device may comprise one or more image capture devices and further examples are described below.
The capture of images may be synchronized with the switching of the surface. Where the image capture device 103 can be switched sufficiently rapidly, the image capture device may be switched directly. Alternatively, a switchable shutter 106, such as a ferroelectric LCD shutter, may be placed in front of the image capture device 103 and the shutter may be switched in synchronization with the surface.
Image capture devices (or other optical sensors) within the surface computing device, such as image capture device 103, may also be used for one or more of the following, when the surface is transparent:
Touch detection may be performed through analysis of images captured in either or both of the modes of operation. These images may have been captured using image capture device 103 and/or another image capture device. In other embodiments, touch sensing may be implemented using other techniques, such as capacitive, inductive or resistive sensing. A number of example arrangements for touch sensing using optical sensors are described below.
The term ‘touch detection’ is used to refer to detection of objects in contact with the computing device. The objects detected may be inanimate objects or may be part of a user's body (e.g. hands or fingers).
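As a rough illustration of the image-analysis step underlying optical touch detection, the sketch below thresholds a captured reflection image and groups bright pixels into connected regions, reporting one centroid per region as a candidate touch point. The threshold value and 4-connectivity are assumptions made for this example, not values from the present description.

```python
# Minimal sketch of optical touch detection: bright spots in a captured
# image (e.g. strong IR reflections from fingertips at the surface) are
# thresholded and grouped into connected components, one centroid each.
# The threshold and the 4-neighbour connectivity are illustrative choices.

def detect_touches(image, threshold=200):
    """Return one (row, col) centroid per bright connected region."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:                       # flood-fill one region
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                touches.append((cy, cx))
    return touches
```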
Touch detection in reflective mode may be performed by illuminating the surface 101 (blocks 401, 403), capturing the reflected light (blocks 402, 204) and analyzing the captured images (block 404). As described above, touch detection may be based on images captured in either or both the projection (diffuse) mode and the image capture (transparent) mode (with
In
In order to reduce or eliminate the effect of ambient IR radiation on the touch detection, an IR filter 605 may be included above the plane in which the TIR occurs. This filter 605 may block all IR wavelengths or in another example, a notch filter may be used to block only the wavelengths which are actually used for TIR. This allows IR to be used for imaging through the surface if required (as described in more detail below).
The use of FTIR, as shown in
The surface computing device shown in
Where touch detection uses detection of light (e.g. IR light) which is deflected by objects on or near the surface (e.g. using FTIR or reflective mode, as described above), the light source may be modulated to mitigate effects due to ambient IR or scattered IR from other sources. In such an example, the detected signal may be filtered to only consider components at the modulation frequency or may be filtered to remove a range of frequencies (e.g. frequencies below a threshold). Other filtering regimes may also be used.
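A minimal software version of filtering the detected signal at the modulation frequency is a lock-in style correlation with the modulation reference, which rejects DC ambient light and components at other frequencies. The sample rate and modulation frequency below are illustrative values, not parameters from this description.

```python
# Sketch of rejecting ambient IR by modulating the source and keeping only
# the component of the detected signal at the modulation frequency
# (a simple software lock-in amplifier).

import math

def locked_in_amplitude(samples, sample_rate, mod_freq):
    """Amplitude of the detector signal component at mod_freq."""
    n = len(samples)
    # Correlate with in-phase and quadrature references.
    i = sum(s * math.cos(2 * math.pi * mod_freq * k / sample_rate)
            for k, s in enumerate(samples))
    q = sum(s * math.sin(2 * math.pi * mod_freq * k / sample_rate)
            for k, s in enumerate(samples))
    return 2 * math.hypot(i, q) / n
```

A constant ambient level contributes nothing to either correlation over whole modulation periods, so only light from the modulated source is measured.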
In another example, stereo cameras placed above the switchable surface 101 may be used for touch detection. Use of stereo cameras for touch detection in a top-down approach is described in a paper by S. Izadi et al entitled “C-Slate: A Multi-Touch and Object Recognition System for Remote Collaboration using Horizontal Surfaces” and published in IEEE Conference on Horizontal Interactive Human-Computer Systems, Tabletop 2007. Stereo cameras may be used in a similar way in a bottom-up configuration, with the stereo cameras located below the switchable surface, and with the imaging being performed when the switchable surface is in its transparent state. As described above, the imaging may be synchronized with the switching of the surface (e.g. using a switchable shutter).
Optical sensors within a surface computing device may be used for imaging in addition to, or instead of, using them for touch detection (e.g. where touch detection is achieved using alternative technology). Furthermore, optical sensors, such as cameras, may be provided to provide visible and/or high resolution imaging. The imaging may be performed when the switchable surface 101 is in its transparent state. In some examples, imaging may also be performed when the surface is in its diffuse state and additional information may be obtained by combining the two captured images for an object.
When imaging objects through the surface, the imaging may be assisted by illuminating the object (as shown in
In an example, the surface computing device shown in
There are many different applications for imaging through the surface of a surface computing device and, dependent upon the application, different image capture devices may be required. A surface computing device may comprise one or more image capture devices and these image capture devices may be of the same or different types.
A high resolution image capture device which operates at visible wavelengths may be used to image or scan objects, such as documents placed on the surface computing device. The high resolution image capture may operate over all of the surface or over only a part of the surface. In an example, an image captured by an IR camera (e.g. camera 103 in combination with filter 105) or IR sensors (e.g. sensors 902, 1002) when the switchable surface is in its diffuse state may be used to determine the part of the image where high resolution image capture is required. For example, the IR image (captured through the diffuse surface) may detect the presence of an object (e.g. object 303) on the surface. The area of the object may then be identified for high resolution image capture using the same or a different image capture device when the switchable surface 101 is in its transparent state. As described above, a projector or other light source may be used to illuminate an object which is being imaged or scanned.
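The two-step flow described above — locate the object in the coarse IR image captured through the diffuse surface, then capture only that region at high resolution through the transparent surface — may be sketched as follows. The 2x scale factor between the IR and high-resolution images is an assumed example value.

```python
# Sketch: a coarse IR image selects the region for high-resolution capture.
# The threshold and the 2x resolution ratio are illustrative assumptions.

def object_bbox(ir_image, threshold=128):
    """Bounding box (rmin, cmin, rmax, cmax) of bright IR pixels, or None."""
    hits = [(r, c) for r, row in enumerate(ir_image)
            for c, v in enumerate(row) if v >= threshold]
    if not hits:
        return None
    rows = [h[0] for h in hits]
    cols = [h[1] for h in hits]
    return min(rows), min(cols), max(rows), max(cols)

def crop_highres(highres, bbox, scale=2):
    """Crop the high-resolution frame at the scaled-up IR bounding box."""
    rmin, cmin, rmax, cmax = bbox
    return [row[cmin * scale:(cmax + 1) * scale]
            for row in highres[rmin * scale:(cmax + 1) * scale]]
```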
The images captured by an image capture device, (which may be a high resolution image capture device), may be subsequently processed to provide additional functionality, such as optical character recognition (OCR) or handwriting recognition.
In a further example, an image capture device, such as a video camera, may be used to recognize faces and/or object classes. In an example, random forest based machine learning techniques that use appearance and shape cues may be used to detect the presence of an object of a particular class.
A video camera located behind the switchable surface 101 may be used to capture a video clip through the switchable surface in its transparent state. This may use IR, visible or other wavelengths. Analysis of the captured video may enable user interaction with the surface computing device through gestures (e.g. hand gestures) at a distance from the surface. In another example, a sequence of still images may be used instead of a video clip. The data (i.e. the video or sequence of images) may also be analyzed to enable mapping of detected touch points to users. For example, touch points may be mapped to hands (e.g. using analysis of the video or the methods described above with reference to
Imaging through the switchable surface in its diffuse state enables tracking of objects and recognition of coarse barcodes and other identifying marks. However, use of a switchable diffuser enables recognition of more detailed barcodes by imaging through the surface in its transparent state. This may enable unique identification of a wider range of objects (e.g. through use of more complex barcodes) and/or may enable the barcodes to be made smaller. In an example, the position of objects may be tracked, either using the touch detection technology (which may be optical or otherwise) or by imaging through the switchable surface (in either state) and periodically, a high resolution image may be captured to enable detection of any barcodes on the objects. The high resolution imaging device may operate in IR, UV or visible wavelengths.
A high resolution imaging device may also be used for fingerprint recognition. This may enable identification of users, grouping of touch events, user authentication etc. Depending on the application, it may not be necessary to perform full fingerprint detection and simplified analysis of particular features of a fingerprint may be used. An imaging device may also be used for other types of biometric identification, such as palm or face recognition.
In an example, color imaging may be performed using a black and white image capture device (e.g. a black and white camera) and by sequentially illuminating the object being imaged with red, green and blue light.
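This sequential-illumination scheme may be sketched as below, where `illuminate` and `capture` stand in for hypothetical hardware hooks: one grayscale frame is captured under each color of illumination and the three frames are merged into a single RGB image.

```python
# Sketch of color imaging with a monochrome camera: the object is lit in
# turn with red, green and blue light and the three grayscale frames are
# merged per-pixel into an RGB image. illuminate/capture are hypothetical.

def capture_color(illuminate, capture):
    """Return an RGB image assembled from three sequentially lit frames."""
    frames = {}
    for channel in ("red", "green", "blue"):
        illuminate(channel)          # light the object in this color
        frames[channel] = capture()  # grayscale frame under that light
    rows = len(frames["red"])
    cols = len(frames["red"][0])
    return [[(frames["red"][r][c], frames["green"][r][c], frames["blue"][r][c])
             for c in range(cols)] for r in range(rows)]
```

Any motion of the object between the three exposures would cause color fringing, so in practice the three captures would be made in quick succession.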
The above description relates to imaging of an object directly through the surface. However, through use of mirrors located above the surface, other surfaces may be imaged. In an example, if a mirror is mounted above the surface computing device (e.g. on the ceiling or on a special mounting), both sides of a document placed on the surface may be imaged. The mirror used may be fixed (i.e. always a mirror) or may be switchable between a mirror state and a non-mirror state.
As described above, the whole surface may be switched or only a portion of the surface may be switched between modes. In an example, the location of an object may be detected, either through touch detection or by analysis of a captured image, and then the surface may be switched in the region of the object to open a transparent window through which imaging can occur, e.g. high resolution imaging, whilst the remainder of the surface stays diffuse to enable an image to be displayed. For example, where palm or fingerprint recognition is performed, the presence of a palm or fingers in contact with the surface may be detected using a touch detection method (e.g. as described above). Transparent windows may be opened in the switchable surface (which otherwise remains diffuse) in the areas where the palm/fingertips are located and imaging may be performed through these windows to enable palm/fingerprint recognition.
A surface computing device, such as any of those described above, may also capture depth information about objects that are not in contact with the surface. The example surface computing device shown in
In a first example, the depth capturing element 1102 may comprise a stereo camera or pair of cameras. In another example, the element 1102 may comprise a 3D time of flight camera, for example as developed by 3DV Systems. The time of flight camera may use any suitable technology, including, but not limited to using acoustic, ultrasonic, radio or optical signals.
In another example, the depth capturing element 1102 may be an image capture device. A structured light pattern, such as a regular grid, may be projected through the surface 101 (in its transparent state), for example by projector 102 or by a second projector 1103, and the pattern as projected onto an object may be captured by an image capture device and analyzed. The structured light pattern may use visible or IR light. Where separate projectors are used for the projection of the image onto the diffuse surface (e.g. projector 102) and for projection of the structured light pattern (e.g. projector 1103), the devices may be switched directly or alternatively switchable shutters 104, 1104 may be placed in front of the projectors 102, 1103 and switched in synchronization with the switchable surface 101.
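For a single feature of the projected pattern, the analysis reduces to standard triangulation on the observed lateral shift (disparity) between where the feature was projected and where it appears in the captured image. The baseline and focal-length values below are illustrative assumptions, not parameters from this description.

```python
# Sketch of depth from a projected structured-light pattern: the disparity
# of one known pattern feature is converted to depth by triangulation.
# baseline_mm and focal_px are illustrative example parameters.

def depth_from_disparity(projected_col, observed_col,
                         baseline_mm=100.0, focal_px=500.0):
    """Depth (mm) of the surface point where one pattern feature lands."""
    disparity = abs(observed_col - projected_col)
    if disparity == 0:
        return None  # no measurable displacement of the feature
    return baseline_mm * focal_px / disparity
```

Repeating this for every feature of a grid pattern yields a coarse depth map of the object above the surface.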
The surface computing device shown in
The projected structured light pattern may be modulated so that the effects of ambient IR or scattered IR from other sources can be mitigated. In such an example, the captured image may be filtered to remove components away from the frequency of modulation, or another filtering scheme may be used.
The surface computing device shown in
In addition to, or instead of, using a filter in the FTIR example, one or both of the IR sources may be modulated and where both are modulated, they may be modulated at different frequencies and the detected light (e.g. for touch detection and/or for depth detection) may be filtered to remove unwanted frequencies.
Depth detection may be performed by varying the diffusivity of the switchable surface 101 because the depth of field is inversely related to how diffuse the surface is, i.e. the position of cut-off 307 (as shown in
The idea may be further extended to provide additional surfaces, (e.g. two switchable and one semi-diffuse or three switchable surfaces) but if increasing numbers of switchable surfaces are used, the switching rate of the surface and the projector or shutter needs to increase if a viewer is not to see any flicker in the projected images. Whilst the use of multiple surfaces is described above with respect to rear projection, the techniques described may alternatively be implemented with front projection.
Many of the surface computing devices described above comprise IR sensors (e.g. sensors 902, 1002) or an IR camera (e.g. camera 301). In addition to detection of touch events and/or imaging, the IR sensors/camera may be arranged to receive data from a nearby object. Similarly, any IR sources (e.g. sources 305, 901, 1001) in the surface computing device may be arranged to transmit data to a nearby object. The communications may be uni-directional (in either direction) or bidirectional. The nearby object may be close to or in contact with the touch surface, or in other examples, the nearby object may be at a short distance from the touch screen (e.g. of the order of meters or tens of meters rather than kilometers).
The data may be transmitted or received by the surface computer when the switchable surface 101 is in its transparent state. The communication may use any suitable protocol, such as the standard TV remote control protocol or IrDA. The communication may be synchronized to the switching of the switchable surface 101 or short data packets may be used in order to minimize loss of data due to attenuation when the switchable surface 101 is in its diffuse state.
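One way to picture the use of short packets is to split the payload so that each packet fits within a transparent phase of the surface, sending only while the surface is transparent. The sketch below assumes a hypothetical `surface_is_transparent` query and an illustrative per-window byte budget.

```python
# Sketch of synchronizing IR data transfer to the switchable surface: the
# payload is split into short packets and each packet is sent only while
# the surface is transparent, avoiding attenuation by the diffuser.
# window_bytes and the polling approach are illustrative assumptions.

def send_in_windows(payload, window_bytes, surface_is_transparent):
    """Split payload into short packets, one per transparent window."""
    packets = [payload[i:i + window_bytes]
               for i in range(0, len(payload), window_bytes)]
    sent = []
    for packet in packets:
        while not surface_is_transparent():
            pass                     # wait for the next transparent phase
        sent.append(packet)          # transmit during the open window
    return sent
```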
Any data received may be used, for example, to control the surface computing device, e.g. to provide a pointer or as a user input (e.g. for gaming applications).
As shown in
The ability to switch the layer between a diffuse state and a transparent state may have other applications such as providing visual effects (e.g. by enabling floating text and a fixed image). In another example, a monochrome LCD may be used with red, green and blue LEDs located behind the switchable surface layer. The switchable layer, in its diffuse state, may be used to spread the colors across the screen (e.g. where there may be well spread LEDs of each color) as they are illuminated sequentially to provide a color display.
Although the examples described above show an electrically switchable layer 101, in other examples the surface may have a diffuse and a transparent mode of operation dependent upon the nature of the light which is incident upon it (as described above).
The switchable nature of the surface 101 may also enable imaging through the surface from the outside into the device. In an example, where a device comprising an image capture device (such as a mobile telephone comprising a camera) is placed onto the surface, the image capture device may image through the surface in its transparent state. In a multi-surface example, such as shown in
When a device is placed on the surface of a surface computing device, the surface computing device displays an optical indicator, such as a light pattern, on the lower of the two surfaces 101. The surface computing device then runs a discovery protocol to identify wireless devices within range and sends a message to each identified device to cause it to use any light sensor to detect a signal. In an example, the light sensor is a camera and the detected signal is an image captured by the camera. Each device then sends data identifying what was detected back to the surface computing device (e.g. the captured image or data representative of the captured image). By analyzing this data, the surface computing device can determine which device detected the indicator that it displayed and therefore determine whether a particular device is the device which is on its surface. This is repeated until the device on the surface is uniquely identified, and then pairing, synchronization or any other interaction can occur over the wireless link between the identified device and the surface computing device. By using the lower surface to display the optical indicator, it is possible to use detailed patterns/icons because the light sensor, such as a camera, is likely to be able to focus on this lower surface.
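The discovery-and-matching loop described above may be sketched as follows. Here `show_indicator` and `query_devices` are hypothetical hooks for displaying a fresh indicator and collecting the sensor reports from all discovered devices, and exact equality between a report and the displayed indicator is a simplifying assumption standing in for image matching.

```python
# Sketch of the optical pairing handshake: show a distinct indicator, ask
# each discovered wireless device what its light sensor saw, and narrow the
# candidate set each round until one device remains. Exact-match comparison
# is a simplifying assumption (a real system would compare images).

def identify_device_on_surface(show_indicator, query_devices):
    """Repeat with fresh indicators until exactly one device matches."""
    candidates = None
    while candidates is None or len(candidates) > 1:
        indicator = show_indicator()           # pattern on the lower surface
        reports = query_devices()              # {device_id: detected signal}
        matched = [d for d, r in reports.items() if r == indicator]
        if candidates is None:
            candidates = matched
        else:
            candidates = [d for d in candidates if d in matched]
        if not candidates:
            return None                        # no device is on this surface
    return candidates[0]
```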
With the surface in its transparent state (as switched in block 203), an image is captured through the surface (block 204). This image capture (in block 204) may include illumination of the surface (e.g. as shown in block 403 of
The process may be repeated, with the surface (or part thereof) being switched between diffuse and transparent states at any rate. In some examples, the surface may be switched at rates which exceed the threshold for flicker perception. In other examples, where image capture only occurs periodically, the surface may be maintained in its diffuse state until image capture is required and then the surface may be switched to its transparent state.
Computing-based device 1600 comprises one or more processors 1601 which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to operate as described above (e.g. as shown in
The application software may comprise one or more of:
The computer executable instructions, such as the operating system 1602 and application software 1603-1611, may be provided using any computer-readable media, such as memory 1612. The memory is of any suitable type such as random access memory (RAM), a disk storage device of any type such as a magnetic or optical storage device, a hard disk drive, or a CD, DVD or other disc drive. Flash memory, EPROM or EEPROM may also be used. The memory may also comprise a data store 1613 which may be used to store captured images, captured depth data etc.
The computing-based device 1600 also comprises a switchable surface 101, a display means 1615 and an image capture device 103. The device may further comprise one or more additional image capture devices 1614 and/or a projector or other light source 1616.
The computing-based device 1600 may further comprise one or more inputs (e.g. of any suitable type for receiving media content, Internet Protocol (IP) input etc), a communication interface and one or more outputs such as an audio output.
Whilst the description above refers to the surface computing device being orientated such that the surface is horizontal (with other elements being described as above or below that surface), the surface computing device may be orientated in any manner. For example, the computing device may be wall mounted such that the switchable surface is vertical.
There are many different applications for the surface computing devices described herein. In an example, the surface computing device may be used in the home or in a work environment, and/or may be used for gaming. Further examples include use within (or as) an automated teller machine (ATM), where the imaging through the surface may be used to image the card and/or to use biometric techniques to authenticate the user of the ATM. In another example, the surface computing device may be used to provide hidden closed-circuit television (CCTV), for example in places of high security, such as airports or banks. A user may read information displayed on the surface (e.g. flight information at an airport) and may interact with the surface using the touch sensing capabilities, whilst at the same time images can be captured through the surface when it is in its transparent state.
Although the present examples are described and illustrated herein as being implemented in a surface computing system, the system described is provided as an example and not a limitation. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of computing systems.
The term ‘computer’ is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
The methods described herein may be performed by software in machine readable form on a tangible storage medium. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that by utilizing conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention. Although various embodiments of the invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.