Display with built in 3D sensing

Abstract
Information from execution of a vision processing module may be used to control a 3D vision system.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention generally relates to vision systems. More specifically, the present invention relates to a gesture-driven vision system that allows a computing device to perceive the physical world and related interactions in three dimensions.


2. Description of the Related Art


Vision systems that allow computers to perceive the physical world in three dimensions are being developed for use in a variety of applications. Among those applications are gesture interfaces. While attempts have been made to use gesture control in place of the remote controls used with televisions and television accessories, such as game controllers for video game systems, those attempts have met with little to no success.


These prior art systems have been limited by their ability (or lack thereof) to track the hands or some other appendage of a user in a real-world setting. Complications that hinder such interfaces and their ability to process information include the fact that users may sit in various locations around a room rather than directly in front of a television. Other problems arise as a result of variations in ambient light and background.


SUMMARY OF THE PRESENTLY CLAIMED INVENTION

In a first claimed embodiment, a system is disclosed comprising a 3D vision system configured to provide vision data; a computer in communication with the 3D vision system, the computer configured to process the vision data; and a display in communication with the computer, the display configured to change in response to the processed vision data.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 illustrates the flow of information in a three dimensional vision system.



FIG. 2 illustrates an exemplary configuration of a three dimensional vision system in a display device.



FIG. 3 illustrates an embodiment of the three dimensional vision system as referenced in the context of FIG. 2.



FIG. 4 illustrates an exemplary illuminator as may be implemented in the context of the present three dimensional vision system.





DETAILED DESCRIPTION

Exemplary embodiments of the present invention include a display with a built-in 3D vision system and computer. Potential implementations of the 3D vision hardware include, but are not limited to, stereo vision, structured light accompanied by one or two cameras, laser rangefinders, and time-of-flight cameras.


The computer may take many forms including, but not limited to, a video game console, personal computer, or a media player, such as a digital video recorder, or DVD player. Vision software may run on a separate embedded computer, a main computer, or some combination of the two. Various processors, memory, interfaces (both user and network) as known in the art may be included to allow for exchanges of information and execution of various software modules, engines, and applications.


In general, the vision software may include perspective transforms, person segmentation, body tracking, hand tracking, gesture recognition, touch detection, and face tracking. In the case of a stereo vision system, the vision software may also include stereo processing, generating depth from disparity.


A variety of other software modules may use vision data. An interactive entertainment engine may use the vision data to create interactive games that can be played using body motion. A TV controller may use the vision data to allow the user to control the display's settings. A media player may use the vision data to control the playing of digital media such as a DVD or MP3. A user analysis module may use the vision data to determine who is near the display and how they are behaving. Any of the aforementioned modules may use an internet connection or send images to the display for display to a user.



FIG. 1 illustrates the flow of information in a three dimensional vision system according to one embodiment. The 3D vision system 101 provides data to a computer 102 such as the main computer, the embedded computer, or a combination computer system. Each stage of vision processing may occur within the 3D vision system 101, within a vision processing module 103, or both.


Information from execution of the vision processing module 103 may be used to control the 3D vision system 101. For example, the vision processing module 103 may send signals to alter the gain level of the cameras in the vision system 101 in order to properly ‘see’ objects in the camera's view. The output of the vision processing in the 3D vision system 101 and/or from execution of the vision processing module 103 may be passed to a display controller 104, an interactive entertainment engine 105, a user analysis module 106, and/or a media player 107. These modules (104, 105, 106, 107) may be designed to use the vision data to track or recognize user positions, hand positions, head positions, gestures, body shapes, and depth images.
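The gain-adjustment feedback described above can be illustrated with a short sketch. This is a minimal sketch only: the camera object and its get_gain()/set_gain()/frame methods, and the brightness targets, are hypothetical placeholders; the disclosure states only that the vision processing module may send signals to alter camera gain.

```python
# Minimal sketch of the feedback loop described above: the vision processing
# stage inspects its own output and sends a gain adjustment back to the 3D
# vision system. The camera interface and thresholds are assumptions.
import numpy as np

TARGET_MEAN = 110      # desired average image brightness (0-255), assumed
TOLERANCE = 15         # acceptable deviation before adjusting gain, assumed

def auto_adjust_gain(camera, frame: np.ndarray) -> None:
    """Nudge camera gain so objects in the camera's view remain properly exposed."""
    mean_brightness = float(frame.mean())
    if mean_brightness < TARGET_MEAN - TOLERANCE:
        camera.set_gain(camera.get_gain() + 1)   # scene too dark: raise gain
    elif mean_brightness > TARGET_MEAN + TOLERANCE:
        camera.set_gain(camera.get_gain() - 1)   # scene too bright: lower gain
```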


The display controller 104 may use vision data from execution of the vision processing module 103 to control the display 110. For example, specific gestures detected by the vision processing module 103, such as a thumbs up or thumbs down, may be used to make specific changes to the display 110 such as turning the display on or off, adjusting the audio volume, changing the channel or input, or adjusting image parameters. Functionality traditionally controlled via a remote control may be controlled via gestures. The display controller 104 may further change the brightness of the display 110 or other parameters based on ambient light conditions detected by the 3D vision system 101.
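As an illustration of the gesture-to-command mapping described above, the following sketch dispatches recognized gesture names to display commands. The gesture labels and the display object's methods are assumptions made for the example, not an interface defined by the disclosure.

```python
# Illustrative only: a table-driven mapping from recognized gestures to display
# commands, standing in for the display controller 104 described above.
GESTURE_ACTIONS = {
    "thumbs_up":   lambda display: display.power_on(),
    "thumbs_down": lambda display: display.power_off(),
    "swipe_left":  lambda display: display.previous_channel(),
    "swipe_right": lambda display: display.next_channel(),
    "raise_hand":  lambda display: display.volume_up(),
    "lower_hand":  lambda display: display.volume_down(),
}

def handle_gesture(gesture_name: str, display) -> None:
    """Dispatch a detected gesture to the corresponding display command."""
    action = GESTURE_ACTIONS.get(gesture_name)
    if action is not None:
        action(display)
```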


The interactive entertainment engine 105 may use vision data to drive interactive graphical content. Examples of the interactive entertainment engine 105 include Adobe's Flash platform and Flash content, the Reactrix Effects Engine and Reactrix content, and a computer game or console video game.


The media player 107 may use vision data from execution of the vision processing module 103 in order to control the playing of image, audio, or video media on the display 110. For example, specific gestures detected by execution of the vision processing module 103, such as a thumbs up or thumbs down, may be used to control the play process. Examples of controlling the play process include triggering a fast forward or pause, or navigating a playlist or DVD menu.


The user analysis module 106 may be executed to use vision data in order to identify users and track their behavior. Identification may occur using face recognition based on data generated from the execution of the vision processing module 103. Alternatively, identification may be established using a login process.


Once identification has occurred, identification of a particular user may be maintained using body tracking software so that each user's identification remains known regardless of whether their face is visible or whether they switch locations. User behavior may also be observed. For example, user position, movement, posture and facial expression may be tracked in order to determine if each user is looking at the display, and what emotion and level of interest they are experiencing relative to the content. This information may be sent to the other modules (e.g., 104, 105, 107).


Data from the user analysis module 106 may be used in execution of the other modules (e.g., 104, 105, 107). For example, the display controller 104 may use this data to automatically switch to a particular user's preferred settings when they enter the room. Furthermore, the display controller 104 may go into an energy-saving mode or turn off entirely if no one is present or paying attention for a specified period of time. The interactive entertainment engine 105 may use this data to do a variety of things, including but not limited to bringing up the identified user's profile when they begin to play, mapping each user's actions to a specific player in the game, pausing the game when the user is not paying attention, and altering the game based on the user's emotions, such as by making the game harder if they look frustrated or easier if they look relaxed.
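The following sketch shows how an interactive entertainment engine might consume data from the user analysis module as described above. The UserState fields, the thresholds, and the game object's methods are illustrative assumptions, not values or interfaces from the disclosure; the difficulty adjustments follow the behavior stated above (harder when the user looks frustrated, easier when relaxed).

```python
# Hedged sketch: adapting a game to the user's presence, attention, and emotion,
# based on data of the kind the user analysis module 106 is described as producing.
from dataclasses import dataclass

@dataclass
class UserState:
    present: bool          # is anyone in the interactive space?
    attentive: bool        # is the user looking at the display?
    frustration: float     # 0.0 (relaxed) .. 1.0 (frustrated), from facial analysis

def update_game(game, user: UserState) -> None:
    """Adapt the game based on user analysis data."""
    if not user.present or not user.attentive:
        game.pause()                                  # pause when nobody is paying attention
        return
    game.resume()
    if user.frustration > 0.7:
        game.set_difficulty(game.difficulty() + 1)    # harder if the user looks frustrated
    elif user.frustration < 0.2:
        game.set_difficulty(game.difficulty() - 1)    # easier if the user looks relaxed
```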


The media player 107 may use this data to do a variety of things, including but not limited to bringing up the identified user's profile when they are looking at their content library, pausing a song, movie, or slideshow if the user walks away, or altering the content played based on the users' emotions. Any of the modules associated with the computer 102 may take advantage of an Internet or other network connection 108 to send and/or receive data. This connection may take a variety of forms, including but not limited to a cellular broadband connection, a DSL connection, or an 802.11 wireless connection.


Video images generated through execution of any of the modules (e.g., 104, 105, 106, 107) may be rendered on graphics hardware 109 and sent to the display 110 for displaying to a user. The modules discussed herein (e.g., 104, 105, 106, 107) may also provide the vision processing module 103 and/or the 3D vision system 101 with commands in order to optimize how vision data is gathered.



FIG. 2 illustrates an exemplary configuration of a three dimensional vision system in a display device, showing a simplified view of one possible configuration of the hardware. Vision hardware 201 in FIG. 2 is built into the border of a display 202. A separate computer 203 takes input from the vision hardware 201 and provides video (and potentially audio) content for display on the display 202. The vision hardware 201 is able to see objects in an interactive space 204. One or more users 205 may be in the interactive space 204 in order to interact with the vision interface.


A front border 207 of the display 202 allows the vision hardware 201 a view of the interactive space 204. This may be accomplished in a variety of ways. For example, the vision hardware 201 may operate on infrared light, and the front border 207 may consist primarily of a material that is transparent to infrared light. Some materials that are transparent to infrared light are also opaque to visible light and appear black, making the vision hardware 201 invisible to the human eye and preserving the aesthetics of the display 202. Examples of such materials include the Kodak Wratten #87C filter.


As long as the portion of the border 207 in front of the vision system 201 is transparent to light from the vision system 201, it does not matter whether the rest of the border 207 is covered in such a material. For aesthetic reasons, the entirety of the border 207 may be covered with the IR-transparent material. Alternately, the border 207 may include holes that enable the vision system 201 to ‘see’ through border 207. The vision system 201 and/or the computer 203 may alternatively be in separate enclosures outside of the display 202.



FIG. 3 illustrates an embodiment of the three dimensional vision system as referenced in the context of FIG. 2. The displayed configuration shows a stereo vision system. Note that power and data cables have been omitted from the diagram for clarity.


A vision system 301 is installed inside the enclosure of a display 308. The vision system 301 includes one or more illuminators 302. Each of the illuminators 302 creates light with a spatially varying textured pattern. This light pattern illuminates the volume of space viewed by the cameras discussed herein (e.g., the stereo camera 303). The pattern has enough contrast to be seen by the camera over the ambient light, and has a high spatial frequency that gives the vision software detailed texture information.


A stereo camera 303, with two or more cameras 304, may also be contained in the vision system 301. The stereo camera 303 may simply pass raw camera images, in analog or digital format, to a separate computer (not shown) for vision processing. Alternately, the stereo camera 303 may contain specialized circuitry or an embedded computer capable of doing onboard vision processing.


Commercially available stereo cameras include, for example, the Tyzx DeepSea™ and the Point Grey Bumblebee™. Such cameras may be monochrome or color, and may be sensitive to one or more specific bands of the electromagnetic spectrum including visible light, near-infrared, far-infrared, and ultraviolet. Some cameras, like the Tyzx DeepSea™, do much of their stereo processing within the camera enclosure using specialized circuitry and an embedded computer.


The illuminators 302 emit light that is invisible or nearly invisible to a human user, and the camera 303 is sensitive to this light. This light may be in the near-infrared frequency range. A front side 309 of the vision system 301 may contain a material that is transparent to light emitted by the illuminators 302. This material may also be opaque to visible light, obscuring the internal workings of the vision system 301 from a human user. Alternately, the front side 309 may consist of a fully opaque material that contains holes letting light out of the illuminator 302 and into the camera 303. The front side 309 may be part of the front border of the display 308. The vision system 301 may contain one or more opaque partitions 305 to prevent light from the illuminators 302 from bouncing around inside the enclosure and into the camera 303. This ensures the camera 303 is able to capture a high quality, high contrast image. The overall form factor of the vision system 301 may be relatively flat in order to properly fit inside the display 308. This can be achieved by placing the illuminators 302 to the side of the stereo camera 303, and creating illuminators 302 that are relatively flat in shape.


The vision system 301 may have a connection that transfers camera data, whether raw or processed, analog or digital, to the computer 203 for processing. This data may be transferred wirelessly, on a separate cable from the power cable, or on a wire that is attached to the power cable. Thus, there may be only a single cable between the vision system 301 and the separate computer 203, with this single cable containing wires that provide both power and data. The illuminator 302 may contain monitoring circuits that would allow an external device to assess its current draw, temperature, number of hours of operation, or other data. The current draw may indicate whether part or all of the illuminator 302 has burnt out. This data may be communicated over a variety of interfaces including serial and USB.
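As an illustration of reading the illuminator health data described above over a serial interface, the sketch below uses the pyserial library. The port name, baud rate, nominal current value, and the comma-separated message format ("current_mA,temperature_C,hours") are assumptions invented for the example; the disclosure states only that such data may be communicated over interfaces including serial and USB.

```python
# Hedged sketch: polling hypothetical illuminator monitoring circuits over serial.
import serial  # pyserial

EXPECTED_CURRENT_MA = 900.0   # nominal draw with all emitters working (assumed value)

def check_illuminator(port: str = "/dev/ttyUSB0") -> dict:
    """Read one status line and flag a possible emitter burnout from low current draw."""
    with serial.Serial(port, baudrate=9600, timeout=1.0) as link:
        line = link.readline().decode("ascii", errors="ignore").strip()
    current_ma, temperature_c, hours = (float(v) for v in line.split(","))
    return {
        "current_mA": current_ma,
        "temperature_C": temperature_c,
        "hours": hours,
        # A large drop in current draw suggests part of the illuminator has burnt out.
        "possible_burnout": current_ma < 0.8 * EXPECTED_CURRENT_MA,
    }
```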


The vision system 301 may contain a computer (not shown) that performs processing of the camera data. This processing may include, but is not limited to, stereo processing, generating depth from disparity, perspective transforms, person segmentation, body tracking, hand tracking, gesture recognition, touch detection, and face tracking. Data produced by the vision software may also be used to create interactive content that utilizes a vision interface. The content may include a representation of the user's body and/or hands, allowing the users to tell where they are relative to virtual objects in the interactive content. This content may be sent to the display 308 for display to a user.


The 3D vision system 301 may alternatively be based on other approaches, including but not limited to laser rangefinders, time-of-flight cameras, and structured light accompanied by one or two cameras.


If the vision system 101 comprises a stereo vision system, 3D computer vision techniques using algorithms such as those based on the Marr-Poggio algorithm may take as input two or more images of the same scene taken from slightly different angles. These Marr-Poggio-based algorithms are examples of stereo algorithms. These algorithms may find texture patches from the different cameras' images that correspond to the same part of the same physical object. The disparity between the positions of the patches in the images allows the distance from the camera to that patch to be determined, thus providing 3D position data for that patch.
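The disparity-to-distance step described above can be summarized in a few lines. The focal length and camera baseline below are placeholder values, not parameters from the disclosure; the relationship Z = f·B/d is the standard rectified-stereo triangulation formula.

```python
# Sketch: converting a disparity map produced by patch matching into a depth map.
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_length_px: float = 700.0,   # assumed value
                         baseline_m: float = 0.06) -> np.ndarray:  # assumed value
    """Convert disparity (pixels) to depth (meters) via Z = f * B / d."""
    with np.errstate(divide="ignore"):
        depth_m = (focal_length_px * baseline_m) / disparity_px
    depth_m[~np.isfinite(depth_m)] = 0.0   # unmatched patches (zero disparity) get no depth
    return depth_m
```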


The performance of this algorithm degrades when dealing with objects of uniform color because uniform color makes it difficult to match up the corresponding patches in the different images. Thus, since the illuminator 302 creates light that is textured, shining the illuminator 302 onto the zone seen by the camera can improve the distance estimates of some 3D computer vision algorithms when processing the camera's data. By lighting objects in the interactive area with a pattern of light, the illuminator 302 improves the amount of texture data that may be used by the stereo algorithm to match patches.


Several methods may be used to remove inaccuracies and noise in the 3D data. For example, background methods may be used to mask out 3D data from areas of the camera's field of view that are known to have not moved for a particular period of time. These background methods (also known as background subtraction methods) may be adaptive, allowing the background methods to adjust to changes in the background over time. These background methods may use luminance, chrominance, and/or distance data from the cameras in order to form the background and determine foreground. Once the foreground is determined, 3D data gathered from outside the foreground region may be removed.
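Below is a minimal sketch of an adaptive background model of the kind described above, operating on depth images: pixels that stay close to a slowly updated background depth are treated as background, and only foreground pixels keep their 3D data. The learning rate and distance threshold are illustrative assumptions.

```python
# Hedged sketch of adaptive background subtraction on depth data.
import numpy as np

class AdaptiveDepthBackground:
    def __init__(self, learning_rate: float = 0.02, threshold_m: float = 0.15):
        self.learning_rate = learning_rate    # how quickly the background adapts (assumed)
        self.threshold_m = threshold_m        # depth change that counts as foreground (assumed)
        self.background = None                # running estimate of the static scene depth

    def foreground_mask(self, depth_m: np.ndarray) -> np.ndarray:
        """Return True where the current depth differs enough from the background model."""
        if self.background is None:
            self.background = depth_m.astype(np.float64).copy()
        mask = np.abs(depth_m - self.background) > self.threshold_m
        # Adapt the background only where the scene appears static, so moving users
        # are not absorbed into the background model.
        self.background[~mask] += self.learning_rate * (depth_m[~mask] - self.background[~mask])
        return mask
```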


A color camera may be added to the vision system 301 to obtain chrominance data for the 3D data of the user and other objects in front of the screen. This chrominance data may be used to acquire a color 3D representation of the user, allowing their likeness to be recognized, tracked, and/or displayed on the screen. Noise filtering may be applied to either the depth image (which is the distance from the camera to each pixel of the camera's image from the camera's point of view), or directly to the 3D data. For example, smoothing and averaging techniques such as median filtering may be applied to the camera's depth image in order to reduce depth inaccuracies. As another example, isolated points or small clusters of points may be removed from the 3D data set if they do not correspond to a larger shape; thus eliminating noise while leaving users intact.
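The two noise-reduction steps mentioned above can be sketched as follows, here using SciPy's image-processing routines; the kernel size and minimum blob size are assumptions chosen for illustration.

```python
# Sketch: median filtering of the depth image, then removal of small isolated clusters.
import numpy as np
from scipy import ndimage

def denoise_depth(depth_m: np.ndarray, kernel: int = 5) -> np.ndarray:
    """Smooth per-pixel depth inaccuracies with a median filter."""
    return ndimage.median_filter(depth_m, size=kernel)

def remove_small_clusters(foreground: np.ndarray, min_pixels: int = 200) -> np.ndarray:
    """Drop isolated blobs too small to correspond to a user, leaving users intact."""
    labels, count = ndimage.label(foreground)
    keep = np.zeros_like(foreground, dtype=bool)
    for label_id in range(1, count + 1):
        blob = labels == label_id
        if blob.sum() >= min_pixels:
            keep |= blob
    return keep
```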


The 3D data may be analyzed in a variety of ways to produce high level information. For example, a user's fingertips, fingers, and hands may be detected. Methods for doing so include various shape recognition and object recognition algorithms. Objects may be segmented using any combination of 2D/3D spatial, temporal, chrominance, or luminance data. Furthermore, objects may be segmented under various linear or non-linear transformations of the aforementioned domains. Examples of object detection algorithms include, but are not limited to deformable template matching, Hough transforms, and the aggregation of spatially contiguous pixels/voxels in an appropriately transformed space.


As another example, the 3D points belonging to a user may be clustered and labeled such that the cluster of points belonging to the user is identified. Various body parts, such as the head and arms of a user, may be segmented as markers. Points may also be clustered in 3-space using unsupervised methods such as k-means or hierarchical clustering. The identified clusters may then enter a feature extraction and classification engine. Feature extraction and classification routines are not limited to use on the 3D spatial data but may also apply to any previous feature extraction or classification in any of the other data domains, for example 2D spatial, luminance, chrominance, or any transformation thereof.
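As a concrete illustration of clustering 3D points with an unsupervised method such as k-means, a short sketch follows. It assumes scikit-learn is available and that the number of users is already known from an earlier segmentation step; both are assumptions of the example rather than requirements stated in the disclosure.

```python
# Sketch: labeling each 3D point with a user cluster index using k-means.
import numpy as np
from sklearn.cluster import KMeans

def cluster_users(points_xyz: np.ndarray, n_users: int) -> np.ndarray:
    """Cluster an N x 3 array of 3D points (meters) into n_users groups."""
    kmeans = KMeans(n_clusters=n_users, n_init=10, random_state=0)
    return kmeans.fit_predict(points_xyz)   # cluster label per point
```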


A skeletal model may be mapped to the 3D points belonging to a given user via a variety of methods including but not limited to expectation maximization, gradient descent, particle filtering, and feature tracking. In addition, face recognition algorithms, such as eigenface or fisherface, may use data from the vision system, including but not limited to 2D/3D spatial, temporal, chrominance, and luminance data, in order to identify users and their facial expressions. Facial recognition algorithms used may be image based, or video based. This information may be used to identify users, especially in situations where they leave and return to the interactive area, as well as change interactions with displayed content based on their face, gender, identity, race, facial expression, or other characteristics.


Fingertips or other body parts may be tracked over time in order to recognize specific gestures, such as pushing, grabbing, dragging and dropping, poking, drawing shapes using a finger, pinching, and other such movements. The 3D vision system 101 may be specially configured to detect specific objects other than the user. This detection can take a variety of forms; for example, object recognition algorithms may recognize specific aspects of the appearance or shape of the object, RFID tags in the object may be read by a RFID reader (not shown) to provide identifying information, and/or a light source on the objects may blink in a specific pattern to provide identifying information.
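A minimal sketch of recognizing one such gesture, a push, from a tracked fingertip follows: the fingertip's distance to the screen is watched over a short window, and a rapid, sustained approach is reported as a push. The window length and distance threshold are assumptions, not values from the disclosure.

```python
# Hedged sketch: detecting a "push" gesture from a tracked fingertip over time.
from collections import deque

class PushDetector:
    def __init__(self, window: int = 10, min_approach_m: float = 0.12):
        self.history = deque(maxlen=window)   # recent fingertip-to-screen distances (meters)
        self.min_approach_m = min_approach_m  # how far the finger must advance (assumed)

    def update(self, fingertip_distance_m: float) -> bool:
        """Feed one tracked sample; return True when a push gesture is detected."""
        self.history.append(fingertip_distance_m)
        if len(self.history) < self.history.maxlen:
            return False
        approach = self.history[0] - self.history[-1]   # net motion toward the screen
        return approach > self.min_approach_m
```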


Building the camera into the display may help to reduce the amount of calibration required for the 3D vision system 101. Since the relative position of the 3D vision system 101 to the display (e.g., the display 110) and the size of the display can both be known ahead of time, it is easy to determine the position of any object seen by the 3D vision system 101 relative to the images on the display. The data from the 3D vision system 101 can be perspective-transformed into a new coordinate space that determines the position of any detected objects relative to the display surface. This makes it possible, for example, to let a user point at a specific object on the screen using their arm, and have the direction of the arm directly point to the object they are selecting.
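A short sketch of that fixed perspective transform follows: because the vision system's pose relative to the display is known at manufacture, camera-space points can be mapped into display-surface coordinates with a constant rotation and translation. The rotation and translation values below stand in for a factory calibration and are purely illustrative.

```python
# Sketch: mapping camera-space 3D points into display-surface coordinates.
import numpy as np

# Assumed factory calibration: camera mounted 30 cm above the display center,
# tilted ~15 degrees downward. R rotates camera axes into display axes; t is
# the camera position expressed in display coordinates (meters).
R = np.array([[1.0, 0.0,   0.0],
              [0.0, 0.966, -0.259],
              [0.0, 0.259,  0.966]])
t = np.array([0.0, 0.30, 0.0])

def camera_to_display(points_camera: np.ndarray) -> np.ndarray:
    """Transform an N x 3 array of camera-space points into display coordinates."""
    return points_camera @ R.T + t
```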



FIG. 4 illustrates an embodiment of the illuminator 302. Light from a lighting source 403 is re-aimed by a lens 402 so that the light is directed towards the center of a lens cluster 401. In one embodiment, the lens 402 is adjacent to the lighting source 403 and has a focal length similar to the distance between the lens cluster 401 and the lighting source 403. This particular embodiment ensures that each emitter's light from the lighting source 403 is centered onto the lens cluster 401.


In one embodiment, the focal length of the lenses in the lens cluster 401 is similar to the distance between the lens cluster 401 and the lighting source 403. This focal length ensures that emitters from the lighting source 403 are nearly in focus when the illuminator 302 is pointed at a distant object. The position of components including the lens cluster 401, the lens 402, and/or the lighting source 403 may be adjustable to allow the pattern to be focused at a variety of distances. Optional mirrors 404 bounce light off of the inner walls of the illuminator 302 so that emitter light that hits the walls passes through the lens cluster 401 instead of being absorbed or scattered by the walls. The use of such mirrors allows low light loss in the desired “flat” configuration, where one axis of the illuminator is short relative to the other axes.
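The focusing choice described above follows from the thin-lens relation. The sketch below uses standard optics notation (object distance s_o, image distance s_i, focal length f) rather than terms from the disclosure, and shows why placing the emitters approximately one focal length from the lens cluster yields a pattern that is nearly in focus on distant objects.

```latex
% Thin-lens relation: emitters act as the object at distance s_o from the lens cluster.
\[
  \frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}
  \quad\Longrightarrow\quad
  s_i = \frac{f\,s_o}{s_o - f},
\]
% so as the emitters approach the focal plane (s_o \to f), the image distance s_i
% grows without bound and the projected pattern is sharpest on distant objects.
```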


The lighting source 403 may include a cluster of individual emitters. The potential light sources for the emitters in the lighting source 403 vary widely; examples of the lighting source 403 include but are not limited to LEDs, laser diodes, incandescent bulbs, metal halide lamps, sodium vapor lamps, OLEDs, and pixels of an LCD screen. The emitter may also be a backlit slide or backlit pattern of holes. In one such embodiment, each emitter aims the light along a cone toward the lens cluster 401. The pattern of emitter positions can be randomized to varying degrees.


The density of emitters on the lighting source 403 may vary across a variety of spatial scales. This ensures that the emitter will create a pattern that varies in brightness even at distances where it is out of focus. The overall shape of the light source may be roughly rectangular. This helps ensure that with proper design of the lens cluster 401, the pattern created by the illuminator 302 covers a roughly rectangular area. This facilitates easy clustering of the illuminators 302 to cover broad areas without significant overlap.


The lighting source 403 may be on a motorized mount, allowing it to move or rotate. In one embodiment, the emitters in the pattern may be turned on or off via an electronic control system, allowing the pattern to vary. In this case, the emitter pattern may be regular, but the pattern of emitters that are on may be random. Many different frequencies of emitted light are possible. For example, near-infrared, far-infrared, visible, and ultraviolet light can all be created by different choices of emitters. The lighting source 403 may be strobed in conjunction with the camera(s) of the computer vision system allowing ambient light to be reduced.


The second optional component, a condenser lens or other hardware designed to redirect the light from each of the emitters in lighting source 403, may be implemented in a variety of ways. The purpose of this component, such as the lens 402 discussed herein, is to reduce wasted light by redirecting the emitters' light toward the center of the lens cluster 401, ensuring that as much of it goes through lens cluster 401 as possible.


In some embodiments, each emitter may be mounted such that it emits light in a cone perpendicular to the surface of the lighting source 403. If each emitter emits light in a cone, the center of the cone may be aimed at the center of the lens cluster 401 by using a lens 402 with a focal length similar to the distance between the lens cluster 401 and the lighting source 403.


The angle of the cone of light produced by the emitters may be chosen such that the cone will completely cover the surface of the lens cluster 401. If the lighting source 403 is designed to focus the light onto the lens cluster 401 on its own, for example by individually angling each emitter, then the lens 402 may not be useful. Implementations for the lens 402 include, but are not limited to, a convex lens, a plano-convex lens, a Fresnel lens, a set of microlenses, one or more prisms, and a prismatic film.


The third optical component, the lens cluster 401, is designed to take the light from each emitter and focus it onto a large number of points. Each lens in the lens cluster 401 may be used to focus each emitter's light onto a different point. Thus, the theoretical number of points that can be created by shining the lighting source 403 through the lens cluster 401 is equal to the number of emitters in the lighting source multiplied by the number of lenses in the lens cluster 401. For an exemplary lighting source with 200 LEDs and an exemplary lens cluster with 36 lenses, this means that up to 7200 distinct bright spots can be created. With the use of mirrors 404, the number of points created is even higher since the mirrors create “virtual” additional lenses in the lens cluster 401. This means that the illuminator 302 can easily create a high resolution texture that is useful to a computer vision system.


All the lenses in the lens cluster 401 may have a similar focal length. The similar focal length ensures that the pattern is focused together onto an object lit by the illuminator 302. The lenses in the lens cluster 401 may alternatively have somewhat different focal lengths so that at least some of the pattern is in focus at different distances.


The user(s) or other objects detected and processed by the system may be represented on the display in a variety of ways. This representation on the display may be useful in allowing one or more users to interact with virtual objects shown on the display by giving them a visual indication of their position relative to the virtual objects.


Forms that this representation may take include, but are not limited to: a digital shadow of the user(s) or other objects, such as a two-dimensional (2D) shape that represents a projection of the 3D data representing their body onto a flat surface; a digital outline of the user(s) or other objects, which can be thought of as the edges of the digital shadow; the shape of the user(s) or other objects in 3D, rendered in the virtual space, which may be colored, highlighted, rendered, or otherwise processed arbitrarily before display; images, icons, or 3D renderings representing the users' hands or other body parts, or other objects; the shape of the user(s) rendered in the virtual space, combined with markers on their hands that are displayed when the hands are in a position to interact with on-screen objects (e.g., the markers on the hands may only show up when the hands are pointed at the screen); or points that represent the user(s) or other objects from the point cloud of 3D data from the vision system, displayed as objects, which may be small and semitransparent.


Other forms of representation include cursors representing the position of users' fingers, which may be displayed or change appearance when the finger is capable of a specific type of interaction in the virtual space; objects that move along with and/or are attached to various parts of the users' bodies (e.g., a user may have a helmet that moves and rotates with the movement and rotation of the user's head); digital avatars that match the body position of the user(s) or other objects as they move whereby the digital avatars are mapped to a skeletal model of the users' positions; or any combination of the aforementioned representations.


In some embodiments, the representation may change appearance based on the users' allowed forms of interactions with on-screen objects. For example, a user may be shown as a gray shadow and not be able to interact with objects until they come within a certain distance of the display, at which point their shadow changes color and they can begin to interact with on-screen objects.


Given the large number of potential features that can be extracted from the 3D vision system 101 and the variety of virtual objects that can be displayed on the screen, there are a large number of potential interactions between the users and the virtual objects. Some examples of potential interactions include 2D force-based interactions and influence-image-based interactions that can be extended to 3D as well. Thus, 3D data about the position of a user could be used to generate a 3D influence image to affect the motion of a 3D object. These interactions, in both 2D and 3D, allow the strength and direction of the force the user imparts on a virtual object to be computed, giving the user control over how they impact the object's motion.
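The following is a hedged sketch of the influence-image idea mentioned above: the user's 3D points are accumulated into a coarse 3D occupancy grid, and the gradient of that grid at a virtual object's location gives the direction and strength of the force the user imparts. The grid size, spatial bounds, and force scale are illustrative assumptions, not values from the disclosure.

```python
# Sketch: computing a force on a virtual object from a 3D influence image.
import numpy as np

def influence_force(user_points: np.ndarray,
                    object_pos: np.ndarray,
                    grid_shape=(64, 64, 64),
                    bounds=((-1.0, 1.0), (-1.0, 1.0), (0.0, 2.0)),
                    strength: float = 1.0) -> np.ndarray:
    """Return a 3-vector force on a virtual object from the user's point cloud."""
    mins = np.array([lo for lo, _ in bounds])
    maxs = np.array([hi for _, hi in bounds])
    scale = (np.array(grid_shape) - 1) / (maxs - mins)

    grid = np.zeros(grid_shape)
    idx = np.clip(((user_points - mins) * scale).astype(int), 0, np.array(grid_shape) - 1)
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)   # build the 3D influence image

    gx, gy, gz = np.gradient(grid)                            # spatial gradient of influence
    obj = np.clip(((object_pos - mins) * scale).astype(int), 0, np.array(grid_shape) - 1)
    gradient = np.array([gx[tuple(obj)], gy[tuple(obj)], gz[tuple(obj)]])
    return -strength * gradient   # push the object away from regions of high user influence
```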


Users may interact with objects by intersecting with them in virtual space. This intersection may be calculated in 3D, or the 3D data from the user may be projected down to 2D and calculated as a 2D intersection. Visual effects may be generated based on the 3D data from the user. For example, a glow, a warping, an emission of particles, a flame trail, or other visual effects may be generated using the 3D position data or some portion thereof. Visual effects may be based on the position of specific body parts. For example, a user could create virtual fireballs by bringing their hands together. Users may use specific gestures to pick up, drop, move, rotate, or otherwise modify virtual objects onscreen.


The virtual space depicted on the display may be shown as either 2D or 3D. In either case, the system merges information about the user with information about the digital objects and images in the virtual space. If the user is depicted two-dimensionally in the virtual space, then the 3D data about the user's position may be projected onto a 2D plane.


The mapping between the physical space in front of the display and the virtual space shown on the display can be arbitrarily defined and can even change over time. The actual scene seen by the users may vary based on the display chosen. In one embodiment, the virtual space (or just the user's representation) is two-dimensional. In this case, the depth component of the user's virtual representation may be ignored.


The mapping may be designed to act in a manner similar to a mirror, such that the motions of the user's representation in the virtual space as seen by the user are akin to a mirror image of the user's motions. The mapping may be calibrated such that when the user touches or brings a part of their body near to the screen, their virtual representation touches or brings the same part of their body near to the same part of the screen. In another embodiment, the mapping may show the user's representation appearing to recede from the surface of the screen as the user approaches the screen.


There are numerous potential uses for the presently disclosed interface. The potential uses include: sports, where users may box, play tennis (with a virtual racket), throw virtual balls, or engage in other sports activities with a computer or human opponent shown on the screen; navigation of virtual worlds, where users may use natural body motions such as leaning to move around a virtual world and use their hands to interact with objects in the virtual world; virtual characters, where a digital character on the screen may talk, play, and otherwise interact with people in front of the display as they pass by it, and where this digital character may be computer controlled or may be controlled by a human being at a remote location; advertising, including interactive product demos and interactive brand experiences; multiuser workspaces, where groups of users can move and manipulate data represented on the screen in a collaborative manner; video games, where users can play games, controlling their onscreen characters via gestures and natural body movements; clothing, where clothes are placed on the image of the user on the display, allowing the user to virtually try on clothes; control of a television without a remote, where a user can use gestures to switch channels, alter the volume, turn the TV on or off, or make other changes; control of a digital video recorder, DVD player, or other media player without a remote, where a user could use gestures to pause, fast forward, navigate a menu of content options, or make other changes; and control of other devices outside the display, where a computer may use a wireless network connection to communicate with external devices that control, for example, the lighting and temperature for a building.

Claims
  • 1. A system comprising: an illuminator configured to emit a predetermined pattern of light into an interactive space; a camera configured to detect at least a portion of the predetermined pattern of light within the interactive space; and a game device configured to generate interest data for an object located within the interactive space based on the detected at least a portion of the predetermined pattern of light, wherein the interest data indicates the object's level of interest in a game, wherein the game device is configured to adjust a difficulty level of the game in response to the object's level of interest.
  • 2. The system of claim 1, wherein the object's level of interest in the game is based on at least a determined emotion of the object or a determined level of attention of the object to the game.
  • 3. The system of claim 1, wherein the object's level of interest in the game is based on one or more of a position of the object, a movement of the object, a posture of the object, or a facial expression of the object.
  • 4. The system of claim 1, wherein the game device is configured to determine whether the object is inattentive to the game and, in response to determining that the object is inattentive to the game, pause the game.
  • 5. The system of claim 1, wherein the game device is configured to increase a difficulty level of the game upon determining that the interest data indicates the object is frustrated.
  • 6. The system of claim 1, wherein the game device is further operative to determine one or more gestures of the object, wherein the determined one or more gestures comprises a control signal for control of a function of the game device.
  • 7. The system of claim 6, wherein the control signal comprises a signal to change one or more of a volume, a channel, an input, or a display setting.
  • 8. The system of claim 6, wherein the camera detecting a thumb of the object initiates a display power off control signal when the object's thumb is pointed down.
  • 9. The system of claim 6, wherein the camera detecting a thumb of the object initiates a display power on control signal when the thumb is pointed up.
  • 10. A method comprising: emitting a predetermined pattern of light into an interactive space; detecting at least a portion of the predetermined pattern of light within the interactive space; generating, by a computing system having one or more computer processors, interest data for an object located within the interactive space based on the detected at least a portion of the predetermined pattern of light, wherein the interest data indicates the object's level of interest in a game; and adjusting, by the computing system, a difficulty level of the game in response to the object's level of interest in the game.
  • 11. The method of claim 10, wherein the object's level of interest in the game is based on at least a determined emotion of the object or a determined level of attention of the object to the game.
  • 12. The method of claim 10, wherein the object's level of interest in the game is based on one or more of a position of the object, a movement of the object, a posture of the object, or a facial expression of the object.
  • 13. The method of claim 10, further comprising: determining whether the object is inattentive to the game and, in response to determining that the object is inattentive to the game, pausing the game.
  • 14. The method of claim 10, further comprising: increasing a difficulty level of the game upon determining that the interest data indicates the object is frustrated.
  • 15. A tangible computer readable medium having software modules including executable instructions stored thereon, wherein the software modules are configured for execution by a computing system having one or more hardware processors, the software modules including at least: an illumination module configured to initiate emission of a predetermined pattern of light into an interactive space; a detection module configured to analyze image data of the interactive space in order to detect at least a portion of the predetermined pattern of light within the interactive space; and a game module configured to generate interest data for an object located within the interactive space based on the detected at least a portion of the predetermined pattern of light, wherein the interest data indicates the object's level of interest in a game, and adjust a difficulty level of the game in response to the object's level of interest in the game.
  • 16. The tangible computer readable medium of claim 15, wherein the object's level of interest in the game is based on at least a determined emotion of the object or a determined level of attention of the object to the game.
  • 17. The tangible computer readable medium of claim 15, wherein the object's level of interest in the game is based on one or more of a position of the object, a movement of the object, a posture of the object, or a facial expression of the object.
  • 18. The tangible computer readable medium of claim 15, wherein the game module is further configured to determine whether the object is inattentive to the game and, in response to determining that the object is inattentive to the game, pause the game.
  • 19. The tangible computer readable medium of claim 15, wherein the game module is further configured to increase a difficulty level of the game upon determining that the interest data indicates the object is frustrated.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims the priority benefit of U.S. provisional patent application No. 61/034,828 filed Mar. 7, 2008 and entitled “Display with Built In 3d Sensing Capability and Gesture Control of TV,” the disclosure of which is incorporated herein by reference.

Provisional Applications (1)
Number Date Country
61034828 Mar 2008 US