Providing efficient and intuitive interaction between a computer system and users thereof is essential for delivering an engaging and enjoyable user experience. Today, most computer systems include a keyboard for allowing a user to manually input information into the computer system, and a mouse for selecting or highlighting items shown on an associated display unit. As computer systems have grown in popularity, however, alternate input and interaction systems have been developed. For example, touch-based, or touchscreen, computer systems allow a user to physically touch the display unit and have that touch registered as an input at the particular touch location, thereby enabling a user to interact physically with objects shown on the display. Due to certain limitations of conventional optical systems, however, a user's input or selection may not be correctly or accurately registered by the computing system.
The features and advantages of the invention, as well as additional features and advantages thereof, will be more clearly understood hereinafter as a result of a detailed description of particular embodiments of the invention when taken in conjunction with the following drawings in which:
Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” and “e.g.” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ”. The term “couple” or “couples” is intended to mean either an indirect or direct connection. Thus, if a first component couples to a second component, that connection may be through a direct electrical connection, or through an indirect electrical connection via other components and connections, such as an optical electrical connection or wireless electrical connection. Furthermore, the term “system” refers to a collection of two or more hardware and/or software components, and may be used to refer to an electronic device or devices, or a sub-system thereof.
The following discussion is directed to various embodiments. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
Conventional touchscreen and optical solutions are limited by certain occlusion issues. Occlusion occurs when an object touching the screen is blocked (or occluded) from view by another object. In other words, an optical touchscreen solution must, by nature, be able to see the object touching the screen in order to accurately register a touch from a user. Most two-camera systems are configured to detect only two touches, and are also limited in the cases in which they can reject a palm touching the screen (i.e., palm rejection capability). These factors limit the effectiveness of touchscreen computing environments utilizing conventional optical solutions.
Embodiments of the present invention disclose a multi-camera system for an electronic display device. According to one embodiment, the multi-camera system includes at least three three-dimensional cameras arranged around the perimeter of the display panel of the computing device. In one embodiment, the multi-camera system includes at least three optical sensors each configured to capture measurement data of an object from a different perspective with respect to the display panel.
Furthermore, a multi-camera system in accordance with embodiments of the present invention has a number of advantages over more traditional camera systems. For example, the solution proposed by embodiments of the present invention provides improved multi-touch performance, improved palm rejection capability, improved three-dimensional object mapping, and improved cost effectiveness. According to one embodiment, the multi-camera system is able to detect a minimum number of touches equal to the number of optical sensors, and without any occlusion issues. As the number of optical sensors increases, it becomes even harder for a user's palm to occlude the desired touch. Furthermore, as the camera system also has the ability to detect three-dimensional objects in the space in front of the display unit, more optical sensors allow the system to generate a much more detailed three-dimensional model of the object. The lack of occlusion also allows for added accuracy with fewer touch points, and the potential for many more than two touches in many scenarios.
Moreover, due to the numerous viewpoints and perspectives of the multi-camera system of the present embodiments, palm rejection capability is greatly improved. In particular, there are far fewer locations where the palm area of a user can land on the display screen in a way that would occlude the user's intended touch. Still further, another advantage of providing at least three three-dimensional optical sensors over other touchscreen technologies is the ability of the optical sensors to scale inexpensively to larger displays and additional touch points. A simplified illustration of such multi-view palm rejection is sketched below.
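By way of illustration only, the following minimal sketch shows one possible way such multi-view palm rejection could be realized in software. The Contact structure, the area thresholds, and the merge radius are illustrative assumptions and are not prescribed by the embodiments described above; the sketch simply keeps any fingertip-sized contact that at least one sensor sees unoccluded.

```python
# Hypothetical sketch: palm rejection by contact-area voting across sensors.
# Region sizes and the area thresholds are illustrative assumptions, not
# values taken from the embodiments described above.
from dataclasses import dataclass

FINGER_MAX_AREA_MM2 = 150.0   # assumed upper bound for a fingertip contact
PALM_MIN_AREA_MM2 = 1000.0    # assumed lower bound for a palm contact

@dataclass
class Contact:
    x_mm: float        # contact centroid on the panel
    y_mm: float
    area_mm2: float    # contact area as seen by one sensor

def classify(contact: Contact) -> str:
    """Label a single contact region as finger, palm, or ambiguous."""
    if contact.area_mm2 <= FINGER_MAX_AREA_MM2:
        return "finger"
    if contact.area_mm2 >= PALM_MIN_AREA_MM2:
        return "palm"
    return "ambiguous"

def accepted_touches(per_sensor_contacts: list[list[Contact]],
                     merge_radius_mm: float = 10.0) -> list[Contact]:
    """Keep a touch if at least one sensor sees a finger-sized region there.

    Because the sensors view the panel from different perspectives, a palm
    that occludes a fingertip in one view rarely occludes it in all views.
    """
    touches: list[Contact] = []
    for contacts in per_sensor_contacts:
        for c in contacts:
            if classify(c) != "finger":
                continue
            # Merge duplicate reports of the same touch from other sensors.
            if all((t.x_mm - c.x_mm) ** 2 + (t.y_mm - c.y_mm) ** 2
                   > merge_radius_mm ** 2 for t in touches):
                touches.append(c)
    return touches
```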
Referring now in more detail to the drawings, in which like numerals identify corresponding parts throughout the views, an exemplary display system 100 is illustrated.
The display system 100 includes a display panel 109 and a transparent layer 107 in front of the display panel 109. The front side of the display panel 109 is the surface that displays an image, and the back of the panel 109 is opposite the front. The three-dimensional optical sensors 110a-110c can be on the same side of the transparent layer 107 as the display panel 109 to protect the three-dimensional optical sensors from contaminants. In an alternative embodiment, the three-dimensional optical sensors 110a-110c may be in front of the transparent layer 107. The transparent layer 107 can be glass, plastic, or another transparent material. The display panel 109 may be a liquid crystal display (LCD) panel, a plasma display, a cathode ray tube (CRT), an OLED, or a projection display such as digital light processing (DLP), for example. In one embodiment, mounting the three-dimensional optical sensors 110a-110c in an area of the display system 100 that is outside of the perimeter of the display panel 109 provides that the clarity of the transparent layer is not reduced by the three-dimensional optical sensors.
Three-dimensional optical sensors 110a, 110b, and 110c are configured to report a three-dimensional depth map to a processor. The depth map changes over time as an object 130 moves in the respective field of view 115a of optical sensor 110a, the field of view 115b of optical sensor 110b, and the field of view 115c of optical sensor 110c. The three-dimensional optical sensors 110a-110c can determine the depth of an object located within their respective fields of view 115a-115c. The depth of the object 130 can be used in one embodiment to determine if the object is in contact with the front side of the display panel 109. According to one embodiment, the depth of the object can be used to determine if the object is within a programmed distance of the display panel but not actually contacting the front side of the display panel. For example, the object 130 may be a user's hand and finger approaching the front side of the display panel 109. In one embodiment, optical sensors 110a and 110c are positioned at the topmost corners around the perimeter of the display panel 109 such that each field of view 115a-115c includes the areas above and surrounding the display panel 109. As such, an object such as a user's hand, for example, may be detected, and any associated motions around the perimeter of and in front of the computer system 100 can be accurately interpreted by the processor.
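For illustration, the following minimal sketch shows how such depth data could be compared against a calibrated panel depth to distinguish contact from hover. The array shapes, units, and thresholds (including the value of the "programmed distance") are assumptions made for the example, not values taken from the embodiments above.

```python
# Illustrative sketch of depth-based touch/hover detection. The sensor API,
# array shapes, and thresholds are assumptions; a real sensor such as
# 110a-110c would supply the depth map and the panel calibration.
import numpy as np

TOUCH_MM = 2.0    # assumed: heights below this count as contact
HOVER_MM = 50.0   # assumed "programmed distance" for hover detection

def detect(depth_map: np.ndarray, panel_depth: np.ndarray):
    """Compare each pixel's depth with the calibrated depth of the panel.

    depth_map   -- current frame from one three-dimensional sensor (mm)
    panel_depth -- per-pixel depth of the bare panel surface (mm)
    Returns boolean masks of touching and hovering pixels.
    """
    height = panel_depth - depth_map          # object height above the panel
    touching = (height >= 0) & (height <= TOUCH_MM)
    hovering = (height > TOUCH_MM) & (height <= HOVER_MM)
    return touching, hovering
```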
Furthermore, inclusion of three optical sensors 110a-110c allows distances and depth to be measured from the viewpoint and perspective of each sensor (i.e., different fields of view and perspectives), thus creating a stereoscopic view of the three-dimensional scene and allowing the system to accurately detect the presence and movement of objects or hand poses.
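As a rough sketch of such multi-view fusion, the fragment below transforms each sensor's point cloud into a shared panel-centered coordinate frame and concatenates the results. The 4x4 extrinsic calibration matrices are assumed to come from a prior calibration step that the description above does not detail.

```python
# A minimal sketch of fusing the three sensors' depth data into one scene.
# The extrinsic matrices are assumed inputs from a calibration procedure.
import numpy as np

def to_panel_frame(points_sensor: np.ndarray, extrinsic: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from sensor to panel coordinates."""
    homogeneous = np.hstack([points_sensor, np.ones((len(points_sensor), 1))])
    return (homogeneous @ extrinsic.T)[:, :3]

def fuse(clouds: list[np.ndarray], extrinsics: list[np.ndarray]) -> np.ndarray:
    """Concatenate every sensor's cloud in a shared panel-centered frame,
    so an object is represented from all available perspectives."""
    return np.vstack([to_panel_frame(c, e) for c, e in zip(clouds, extrinsics)])
```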
Conventional two-dimensional sensors that use triangulation-based methods may involve intensive image processing to approximate the depth of objects. Generally, two-dimensional image processing uses data from a sensor and processes that data to generate information, such as depth, that is normally not available directly from a two-dimensional sensor. Such intensive, color-based image processing need not be used for a three-dimensional sensor, because the data from the three-dimensional sensor already includes depth data. For example, the image processing for a time-of-flight three-dimensional optical sensor may involve a simple table lookup to map the sensor reading to the distance of an object from the display. The time-of-flight sensor determines the depth of an object from the sensor based on the time that it takes for light to travel from a known source, reflect off the object, and return to the three-dimensional optical sensor.
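The time-of-flight relationship, and the table lookup mentioned above, can be sketched as follows. The raw-reading quantization (0.1 ns ticks) and the table size are illustrative assumptions, not parameters specified by the embodiments.

```python
# Sketch of time-of-flight depth and the simple table lookup described above.
SPEED_OF_LIGHT_MM_PER_NS = 299.792458  # speed of light in mm per nanosecond

def depth_from_round_trip(round_trip_ns: float) -> float:
    """Light travels to the object and back, so halve the round trip."""
    return SPEED_OF_LIGHT_MM_PER_NS * round_trip_ns / 2.0

# Precompute a lookup table from quantized sensor readings to distance, so
# per-pixel conversion is a single indexed read rather than arithmetic.
LOOKUP_MM = [depth_from_round_trip(t * 0.1) for t in range(1024)]

def depth_mm(raw_reading: int) -> float:
    return LOOKUP_MM[raw_reading]   # raw reading assumed to be 0.1 ns ticks
```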
In an alternative embodiment, the light source can emit structured light, that is, the projection of a light pattern such as a plane, grid, or more complex shape at a known angle onto an object. The way that the light pattern deforms when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene. Integral imaging is a technique that provides a full-parallax stereoscopic view. To record the information of an object, a micro lens array is used in conjunction with a high-resolution optical sensor. Due to the different position of each micro lens with respect to the imaged object, multiple perspectives of the object can be imaged onto the optical sensor. The recorded image, which contains elemental images from each micro lens, can be electronically transferred and then reconstructed in image processing. In some embodiments, the integral imaging lenses can have different focal lengths, and the object's depth is determined based on whether the object is in focus (a focus sensor) or out of focus (a defocus sensor). However, embodiments of the present invention are not limited to any particular type of three-dimensional optical sensor.
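Purely as an illustration of the structured-light approach, the following sketch recovers depth by triangulation from the observed shift of a projected pattern feature. The baseline, focal length, and disparity values are assumed inputs; the embodiments above do not prescribe a specific geometry.

```python
# Hedged sketch of depth from structured light by triangulation. The
# projector-sensor baseline and sensor focal length are assumed knowns.
def structured_light_depth_mm(baseline_mm: float,
                              focal_px: float,
                              disparity_px: float) -> float:
    """Classic triangulation: a projected pattern feature shifts in the
    sensor image in inverse proportion to the depth of the surface."""
    if disparity_px <= 0:
        raise ValueError("pattern feature not matched")
    return baseline_mm * focal_px / disparity_px

# Example with assumed numbers: a 50 mm baseline, a 600 px focal length,
# and a 40 px observed pattern shift place the surface at 750 mm.
assert structured_light_depth_mm(50.0, 600.0, 40.0) == 750.0
```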
The multi-camera three-dimensional touchscreen environment described in the embodiments of the present invention has the advantage of being able to resolve three-dimensional objects in more detail. For example, more pixels are used to image the object, and the object is imaged from more angles, resulting in a more complete representation of the object. The multi-camera system can also be used in a three-dimensional touchscreen environment to image different volumes in front of the display panel. Accordingly, occlusion and palm rejection problems are drastically reduced, allowing a user's touch input to be correctly and accurately registered by the computer display system.
Furthermore, while the invention has been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, although exemplary embodiments depict an all-in-one computer as the representative computer display system, the invention is not limited thereto. For example, the multi-camera system of the present embodiments may be implemented in a netbook, a tablet personal computer, a cell phone, or any other electronic device having a display panel.
Furthermore, the three-dimensional object may be any device, body part, or item capable of being recognized by the three-dimensional optical sensors of the present embodiments. For example, a stylus, ball-point pen, or small paint brush may be used as a representative three-dimensional object by a user for simulating painting motions to be interpreted by a computer system running a painting application. That is, the multi-camera system, and the optical sensor arrangement thereof, is configured to detect and recognize any three-dimensional object within the field of view of a particular optical sensor.
In the foregoing description, numerous details are set forth to provide an understanding of the present invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these details. Thus, although the invention has been described with respect to exemplary embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.