Touch screen systems are available that use two or more camera assemblies located in different corners of the touch screen. Each camera assembly includes one linear light sensor and simple optics, such as a lens, that detect light within a single field of view. One or more infrared light sources may be mounted in proximity to the lens or proximate other areas of the touch screen.
A touch screen system that uses one such camera assembly mounted in one corner of the touch screen and a second such camera assembly mounted in an adjacent corner provides reliable detection of a single touch on the touch screen using triangulation. The finger or stylus is detected either by the infrared light it reflects or by the shadow it casts against the light reflected from the bezel of the touch screen. However, blind spots may occur near each of the camera assemblies where the location of a touch cannot be determined.
Touch screen systems capable of detecting two or more simultaneous touches are desirable because they increase the functionality available to the user. Additional camera assemblies with linear image sensors, located in other corners of the touch screen, are needed to eliminate the aforementioned blind spots as well as to detect two or more simultaneous touches. Precise mechanical positioning of the multiple separate camera assemblies is also needed, adding to the complexity of the system.
In accordance with an embodiment, a touch system includes a touch sensing plane and a camera assembly that is positioned proximate the touch sensing plane. The camera assembly includes an image sensor and at least one virtual camera that has at least two fields of view associated with the touch sensing plane. The at least one virtual camera includes optical components that direct light that is proximate the touch sensing plane along at least one light path. The optical components direct and focus the light onto different areas of the image sensor.
In accordance with an embodiment, a touch system includes a touch sensing plane and a camera assembly positioned proximate the touch sensing plane. The camera assembly includes an image sensor to detect light levels associated with light within the touch sensing plane. The light levels are configured to be used in determining coordinate locations in at least two dimensions of one touch or simultaneous touches within the touch sensing plane.
In accordance with an embodiment, a camera assembly for detecting one touch or simultaneous touches includes an image sensor and optical components that direct light associated with at least two fields of view along at least one light path. The optical components direct and focus the light that is associated with one of the fields of view onto one area of the image sensor and direct and focus the light that is associated with another one of the fields of view onto a different area of the image sensor. Light levels associated with the light are configured to be used in determining coordinate locations of one touch or simultaneous touches within at least one of the at least two fields of view.
The foregoing summary, as well as the following detailed description of certain embodiments of the present invention, will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware (e.g., a general-purpose signal processor or random access memory, hard disk, or the like). Similarly, the programs may be stand-alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. It should be understood that the various embodiments are not limited to the arrangements and instrumentality shown in the drawings.
Referring to both figures, the camera assembly 104 includes an image sensor 130 and at least one virtual camera. A virtual camera may also be referred to as an effective camera. In one embodiment, the image sensor 130 may be a two-dimensional (2D) image sensor of the type used in a digital camera. In another embodiment, the image sensor 130 may be a linear sensor. In some embodiments, the linear sensor may have a length such that different areas may be used to detect light levels associated with different fields of view, as discussed further below. In the embodiment illustrated, the camera assembly 104 includes four virtual cameras 132, 134, 136 and 138.
In one embodiment, the light may be directed by one or more focusing, reflecting and refracting optical components. For example, the virtual camera 132 has optical components 160, 162, 164 and 166. The light proximate the touch surface 102 is gathered by at least one optical component, such as the component 160, and directed by the other optical components, such as the components 162, 164 and 166, along a light path that extends to the image sensor 130. The light is then directed to and focused onto a predetermined area of the image sensor 130. Each virtual camera 132-138 thus has optical components that direct the light from predetermined fields of view of the touch surface 102 along a light path associated with that virtual camera, and the light from each light path is directed and focused onto a different predetermined area of the image sensor 130. In one embodiment, the alignment of the directed and focused light with respect to the area of the image sensor 130 may be accomplished through software in conjunction with, or instead of, mechanical alignment of structural components.
The camera assembly 104 may in some embodiments include a light source 146 that illuminates the touch sensing plane 170 with a sheet of light. The touch sensing plane 170 may be substantially parallel to the touch surface 102. The light source 146 may be an infrared light source, although other frequencies of light may be used; for example, the light source 146 may be a visible light source. In another embodiment, the light source 146 may be a laser diode such as a vertical-cavity surface emitting laser (VCSEL), which may provide a more refined fan beam compared to an alternative infrared light source. The light source 146 may provide constant illumination when the system 100 is active, or may provide pulses of light at regular intervals. The light source 146 may illuminate the entirety or a portion of the touch sensing plane 170. In another embodiment, a second light source 156 may be mounted proximate a different corner or along a side of the touch surface 102 or touch sensing plane 170. Therefore, in some embodiments more than one light source may be used, and in other embodiments, the light source may be located away from the camera assembly 104.
In some embodiments, a reflector 148 is mounted proximate to the sides 140, 142, 152 and 154 of the touch surface 102. The reflector 148 may be formed of a retroreflective material or other reflective material, and may reflect the light from the light source 146 towards the camera assembly 104. The reflector 148 may be mounted on or integral with an inside edge of a bezel 150 or frame around the touch surface 102. For example, the reflector 148 may be a tape, paint or other coating substance that is applied to one or more surfaces of the bezel 150. In one embodiment, the reflector 148 may extend fully around all sides of the touch surface 102. In another embodiment, the reflector 148 may extend fully along some sides, such as the sides 152 and 154, which are opposite the camera assembly 104, and partially along the sides 140 and 142, so as not to extend into the immediate vicinity of the camera assembly 104.
A processor module 110 may receive the signals sent to the touch screen controller 108 over the cable 106. Although shown separately, the touch screen controller 108 and the image sensor 130 may be within the same unit. A triangulation module 112 may process the signals to determine whether the signals indicate no touch, one touch, or two or more simultaneous touches on the touch surface 102. For example, the level of light may be at a baseline profile when no touch is present. The system 100 may periodically update the baseline profile based on ambient light, such as to take into account changes in sunlight and room lighting. In one embodiment, if one or more touches are present, a decrease in light on at least one area of the sensor 130 may be detected. In another embodiment, the presence of one or more touches may be indicated by an increase in light on at least one area of the sensor 130. In one embodiment, the triangulation module 112 may also identify the associated coordinates of any detected touch. In some embodiments, the processor module 110 may also access a look-up table 116 or other storage format that may be stored in the memory 114. The look-up table 116 may be used to store coordinate information that is used to identify the locations of one or more touches. For example, (X, Y) coordinates may be identified. In another embodiment, (X, Y, Z) coordinates may be identified, wherein the Z axis provides an indication of how close an object, such as a finger or stylus, is to the touch surface 102 or where the object is within the depth of the touch sensing plane 170. Information with respect to how fast the object is moving may also be determined. The triangulation module 112 may thus identify one or more touches that are within a predetermined distance of the touch surface 102. Therefore, touches may be detected when in contact with the touch surface 102 and/or when immediately proximate to, but not in contact with, the touch surface 102. In some embodiments, the processing of signals to identify the presence and coordinates of one or more touches may be accomplished in hardware, software and/or firmware that is not within the touch screen controller 108. For example, the processor module 110 and/or triangulation module 112, and/or the processing functionality thereof, may be within a host computer 126 or other computer or processor, or within the camera assembly 104.
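By way of illustration only, the following Python sketch (not part of the original disclosure) shows one way the baseline profile might be tracked against ambient light and the presence of a touch detected from per-pixel light levels. The function names, the smoothing constant and the drop threshold are assumptions chosen for illustration, not the system's actual implementation.

```python
import numpy as np

def update_baseline(baseline, frame, alpha=0.01):
    # Track slow ambient changes (sunlight, room lighting) with an
    # exponential moving average so they do not register as touches.
    # alpha is an assumed smoothing constant.
    return (1.0 - alpha) * baseline + alpha * frame

def touch_present(baseline, frame, drop_fraction=0.5):
    # A touch casts a shadow: some pixels fall well below the baseline
    # profile. In embodiments where a touch *increases* the detected
    # light, the comparison would be inverted.
    return bool(np.any(frame < baseline * drop_fraction))
```

In such a scheme, the controller would test each frame for touches and fold only untouched frames into the baseline, so that a resting finger does not gradually become part of the ambient profile.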
As used herein, “simultaneous touches” refers to two or more touches that are present within the touch sensing plane 170 and/or in contact with the touch surface during a same time duration but are not necessarily synchronized. Therefore, one touch may have a duration that starts before the beginning of the duration of another touch, such as a second touch, and at least portions of the durations of the first and second touches overlap each other in time. For example, two or more simultaneous touches occur when objects such as fingers or styluses make contact with the touch surface 102 in two or more distinct locations, such as at two or more of the locations 118, 120 and 122, over a same time duration. Similarly, two or more simultaneous touches may occur when objects are within a predetermined distance of, but not in contact with, the touch surface 102 in two or more distinct locations over a same time duration. In some embodiments, one touch may be in contact with the touch surface 102 while another simultaneous touch is proximate to, but not in contact with, the touch surface 102.
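The overlap condition above reduces to a simple interval test. The following hypothetical helper illustrates it, where the start and end values are assumed timestamps bounding the durations of two touches.

```python
def touches_are_simultaneous(start_a, end_a, start_b, end_b):
    # Two touches are "simultaneous" when their durations overlap in
    # time, even if neither starts nor ends together with the other.
    return start_a < end_b and start_b < end_a
```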
When one or more touches are identified, the processor module 110 may then pass the (X, Y) coordinates (or (X, Y, Z) coordinates) to a display module 124 that may be stored within one or more modules of firmware or software. The display module 124 may be a graphical user interface (GUI) module. In one embodiment, the display module 124 is run on a host computer 126 that also runs an application code of interest to the user. The display module 124 determines whether the coordinates indicate a selection of a button or icon displayed on the touch surface 102. If a button is selected, the host computer 126 or other component(s) (not shown) may take further action based on the functionality associated with the particular button. The display module 124 may also determine whether one or more touches are associated with a gesture, such as zoom or rotate. The one or more touches may also be used to replace mouse and/or other cursor input.
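By way of illustration only, a minimal sketch of the kind of hit test the display module 124 might perform follows; the Button type and hit_test function are hypothetical and stand in for whatever GUI toolkit the host computer 126 actually runs.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Button:
    x: float       # left edge of the button in touch coordinates
    y: float       # top edge of the button
    width: float
    height: float
    name: str

def hit_test(buttons: List[Button], x: float, y: float) -> Optional[Button]:
    # Return the first button whose bounds contain the reported
    # (X, Y) touch coordinates, or None if no button was selected.
    for b in buttons:
        if b.x <= x <= b.x + b.width and b.y <= y <= b.y + b.height:
            return b
    return None
```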
The directed light is focused and/or directed on an area, such as area 218, 220, 222, or 224 of a sensor surface 216 of the image sensor 130. In one embodiment, the image sensor 130 may be a 2D image sensor and the sensor surface 216 may have a plurality of sensing lines that sense levels of light.
In another embodiment, if the image sensor 130 is a linear sensor, the sensor surface 216 may have a single sensing line that extends along a length of the linear sensor, as discussed below.
Each of the virtual cameras 132-138 has an associated field of view (FOV). For example, the virtual camera 132 has a FOV 300, the virtual camera 134 has a FOV 302, the virtual camera 136 has a FOV 304, and the virtual camera 138 has a FOV 306.
The FOV 300 overlaps at least portions of the fields of view 302, 304 and 306. In one embodiment, a FOV of a virtual camera may entirely overlap a FOV of another virtual camera. In another embodiment, a FOV of a first virtual camera may overlap some of the fields of view of other virtual cameras while not overlapping any portion of another FOV of a second virtual camera. In yet another embodiment, the FOVs of at least some of the virtual cameras may be adjacent to each other.
In the embodiment shown, the virtual camera 132 has two optical surfaces 308 and 310.
The two optical surfaces 308 and 310 of virtual camera 132 direct the light that is proximate the touch surface 102 and/or within the touch sensing plane 170. The optical surface 308 is associated with one light path 320 and the optical surface 310 is associated with another light path 322. The light paths 320 and 322 may, however, be formed using the same set of optical components within the virtual camera 132, such as the optical components 200, 208, 210 and 212.
Although each of the virtual cameras 132-138 is shown as having two light paths, in other embodiments a virtual camera may have a single light path or more than two light paths.
One or more small dead zones, such as the dead zones 316 and 318, may occur immediately proximate the camera assembly 104 on outer edges of the touch surface 102. In some embodiments, the bezel 150 may be arranged so that the dead zones lie outside the active area of the touch surface 102.
Referring to both figures, the light directed by each of the virtual cameras 132-138 may be focused onto a separate set of sensing lines of the image sensor 130.
Turning to the virtual camera 134, two optical components 324 and 326 direct light associated with the FOV 302. The light paths associated with the two optical components 324 and 326 may be directed and focused onto one set of sensing lines. For example, the directed light associated with the optical components 324 and 326 may be directed and focused onto an area including sensing lines 360, 361, 362, 363, 364 and 365. Again, the set of sensing lines 360-365 may be separate from other sets of sensing lines.
Similarly, the virtual camera 136 may have two optical components 328 and 330 that direct light associated with the FOV 304. The directed light may be directed and focused onto the neighboring sensing lines 370, 371, 372, 373, 374 and 375. The virtual camera 138 may have two optical components 332 and 334 that direct light associated with the FOV 306. The directed light from the optical component 332 may be directed and focused onto the neighboring sensing lines 380, 381, 382, 383, 384 and 385, while the directed light from the optical component 334 may be directed and focused onto the neighboring sensing lines 390, 391, 392, 393, 394 and 395.
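By way of illustration only, the following sketch shows how per-virtual-camera light profiles could be extracted from a frame of a 2D image sensor by averaging each virtual camera's designated sensing lines. The row assignments are illustrative placeholders, not the actual sensing-line numbers of the sensor.

```python
import numpy as np

# Assumed mapping from each virtual camera to the sensor rows
# (sensing lines) onto which its light is directed and focused.
SENSING_LINES = {
    "vc_132": slice(0, 6),
    "vc_134": slice(10, 16),
    "vc_136": slice(20, 26),
    "vc_138": slice(30, 36),
}

def extract_profiles(frame_2d):
    # Average the rows assigned to each virtual camera, yielding one
    # 1-D light-level profile per field of view.
    return {name: frame_2d[rows, :].mean(axis=0)
            for name, rows in SENSING_LINES.items()}
```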
The optical components or optical surfaces of one virtual camera, such as the virtual camera 134, may be displaced with respect to the optical components or surfaces of the other virtual cameras 132, 136 and 138 to provide binocular vision. In contrast, optical components or optical surfaces that are positioned close to one another, such as the optical surfaces 308 and 310, may be considered to be within the same virtual camera because they increase the effective angular FOV of that virtual camera.
It should be understood that other sensor configurations may be used. Therefore, different sensing lines and pixel arrangements may be used while still providing the ability to focus light associated with different fields of view on different areas of the image sensor.
Structures 402 and 404 may be provided having one or more through holes 406, 408 and 410 for connecting the camera assembly 104 to other structure associated with the touch surface 102. The structures 402 and 404 may extend below the optical components. Other structural and attachment configurations are contemplated.
Optical surfaces 418 and 419 are associated with the virtual camera 132, optical surfaces 420 and 421 are associated with the virtual camera 134, optical surfaces 422 and 423 are associated with the virtual camera 136, and optical surfaces 424 and 425 are associated with the virtual camera 138. By way of example only, each of the optical surfaces 418 and 419 may be associated with a different optical component or may be formed integral with a single optical component. In one embodiment, one or more of the optical components associated with the virtual cameras 132, 134, 136 and 138 may have more than one optical surface.
As discussed above, some surfaces may be formed of an optically black or light occluding material, or may be covered with a light occluding material. For example, referring to the virtual camera 138 and the optical surfaces 424 and 425, surfaces 430, 432, 434, 436 and 438 (the surface closest to and substantially parallel with the touch surface 102 and/or the touch sensing plane 170), may be covered or coated with a light occluding material. Similarly, the outside surfaces of the material forming the optical components that direct the light paths to the image sensor 130 may be covered with a light occluding material. Surfaces that do not result in light interference may not be covered with a light occluding material.
Referring to the graph 600, detected light levels across a FOV are shown, including a baseline profile 606 that represents the level of light detected when no touch is present.
A dip may be indicated in the graph 600 when a touch is present. More than one dip, such as the dips 608 and 610, may be indicated when more than one touch is present within the associated FOV. The dips occur because the finger, stylus or other selecting item blocks the return of reflected light to the virtual camera. In other embodiments wherein an increase in detected light is used to detect a touch, an upward protrusion above the baseline profile 606 occurs in the graph 600 rather than a dip, and the detection of one or more touches may be based on an increase in detected light. This may occur in touch systems that do not use the reflector 148.
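The dips can be located by scanning each profile for contiguous runs of pixels that fall below a fraction of the baseline. A hypothetical sketch follows; the threshold value is an assumption chosen for illustration.

```python
def find_dips(profile, baseline, drop_fraction=0.5):
    # Return (start, end) index pairs of contiguous runs where the
    # detected light falls well below the baseline profile; each run
    # is a candidate touch shadow within the FOV.
    dips, start = [], None
    for i, (level, base) in enumerate(zip(profile, baseline)):
        below = level < base * drop_fraction
        if below and start is None:
            start = i
        elif not below and start is not None:
            dips.append((start, i))
            start = None
    if start is not None:
        dips.append((start, len(profile)))
    return dips
```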
A portion of the pixels in the image sensor 130 may individually or in sets be associated with an angle with respect to the optical component and/or optical surface(s) of the optical component of the particular virtual camera. For the detection of a single touch, triangulation may be accomplished by drawing lines from the optical surfaces at the specified angles, indicating the location of the touch where the lines cross. More rigorous detection algorithms may be used to detect two or more simultaneous touches. In some embodiments, the look-up table 116 may be used alone or in addition to other algorithms to identify the touch locations.
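By way of illustration only, the triangulation described above may be sketched as follows. The linear pixel-to-angle model is a simplification standing in for a calibrated mapping such as the look-up table 116, and the camera positions and FOV parameters are assumed inputs.

```python
import math

def pixel_to_angle(pixel, pixels_in_area, fov_start, fov_span):
    # Map a pixel index within a virtual camera's sensor area to a
    # viewing angle (radians) across its FOV. A real system would use
    # a calibrated per-pixel table rather than this linear model.
    return fov_start + (pixel / pixels_in_area) * fov_span

def triangulate(p0, angle0, p1, angle1):
    # Intersect two rays, each cast from a virtual camera position at
    # its detected angle; the crossing point is the touch location.
    (x0, y0), (x1, y1) = p0, p1
    d0 = (math.cos(angle0), math.sin(angle0))
    d1 = (math.cos(angle1), math.sin(angle1))
    denom = d0[0] * d1[1] - d0[1] * d1[0]
    if abs(denom) < 1e-9:
        return None  # rays nearly parallel: the in-line blind case
    t = ((x1 - x0) * d1[1] - (y1 - y0) * d1[0]) / denom
    return (x0 + t * d0[0], y0 + t * d0[1])
```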
In some embodiments, a centroid of the touch may be determined. For example, the use of the reflector 148 may improve the centroid determination as the reflector 148 creates an intense return from the light source 146, creating a bright video background within which the touch appears as a well defined shadow. In other words, a strong positive return signal is detected when a touch is not present and a reduction in the return signal is detected when a touch is present.
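A sub-pixel centroid can be obtained by weighting each pixel in a dip by its depth below the bright, reflector-backed baseline. This hypothetical sketch assumes the dip bounds come from a detector such as the one shown earlier.

```python
import numpy as np

def dip_centroid(profile, baseline, start, end):
    # Weight each pixel by how far it falls below the baseline; the
    # weighted mean gives a sub-pixel centroid for the touch shadow.
    idx = np.arange(start, end)
    depth = np.maximum(baseline[start:end] - profile[start:end], 0.0)
    if depth.sum() == 0.0:
        return 0.5 * (start + end)  # degenerate dip: return midpoint
    return float((idx * depth).sum() / depth.sum())
```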
In some embodiments, the pointer that is used to select a touch location may contribute a positive signal that is somewhat variable depending on pointer color, reflectivity, texture, shape and the like, and may be more difficult to define in terms of its associated centroid. In a touch system having a light source 146 and reflector 148, the pointer blocks the strong positive return signal from the reflector 148. The drop in the return signal may be very large in contrast to the positive signal from the pointer, rendering the reflective effect of the pointer as a net reduction in signal which may not negatively impact the ability of the system 100 to detect the coordinates of the touch.
An additional camera assembly 702 may be used for more robust touch detection and/or to identify an increasing number of simultaneous touches. For example, a single camera assembly may not be able to detect two simultaneous touches when the touches are close to each other and far away from the camera assembly, or when the camera assembly and the two touches are substantially in line with respect to each other.
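The in-line condition can be made concrete by comparing the angles that two touch locations subtend at a camera assembly; the tolerance below is an assumed value for illustration.

```python
import math

def touches_nearly_in_line(camera, touch1, touch2, tol=0.01):
    # If two touches subtend nearly the same angle (radians) at a
    # camera assembly, one may occlude the other, and an additional
    # camera assembly is needed to resolve them.
    a1 = math.atan2(touch1[1] - camera[1], touch1[0] - camera[0])
    a2 = math.atan2(touch2[1] - camera[1], touch2[0] - camera[0])
    d = abs(a1 - a2)
    return min(d, 2.0 * math.pi - d) < tol
```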
The additional camera assembly 702 may also be used if the touch surface 102 and/or touch sensing plane 170 are relatively large and/or more than one user may interact with the touch surface 102 at the same time. The information detected by the camera assemblies 104 and 702 may be combined and used together to identify locations of touches, or may be used separately to identify locations of touches. The fields of view of the virtual cameras within the camera assembly 702 may at least partially overlap at least some of the fields of view discussed above.
In this embodiment, camera assemblies 802 and 804 may be mounted proximate corners of a touch screen 810, and a camera assembly 806 may be mounted along a side of the touch screen 810. Referring to the camera assembly 806, one or both of virtual cameras 834 and 836 may have a FOV that is larger than the FOV associated with the virtual cameras of the camera assemblies 802 and 804. For example, each of virtual cameras 834 and 836 may have a FOV of up to 180 degrees. As discussed previously, the virtual cameras of a camera assembly mounted proximate a corner of the display screen, such as the camera assembly 104, may together cover a FOV of approximately 90 degrees.
Increasing the number of camera assemblies located in different areas with respect to the touch screen 810 may allow a greater number of simultaneous touches to be detected. As shown, there are five simultaneous touches at locations 816, 818, 820, 822 and 824. With respect to the camera assembly 802, the touch at location 816 may at least partially obscure the touches at locations 820 and 824. With respect to the camera assembly 804, the touch at location 818 may at least partially obscure the touches at locations 820 and 822. Therefore, a separate touch at location 820 may not be detected by either of the camera assemblies 802 and 804. With the addition of the camera assembly 806, however, the touch at location 820 is detected. Similarly, with respect to the camera assembly 806, the touches at locations 816 and 818 may at least partially obscure the touches at locations 822 and 824, respectively. However, in this configuration the camera assembly 802 would detect the touch at location 822 and the camera assembly 804 would detect the touch at location 824.
To detect an increased number of simultaneous touches and/or to decrease potential blind spots formed by touches, one or more additional camera assemblies (not shown) may be mounted proximate at least one of the other two corners 838 and 840 or proximate the sides 828, 830 and 832 of the touch screen 810.
In some embodiments, one of the camera assemblies, such as the camera assembly 806, may be replaced by a webcam (for example, a standard video camera) or other visual detecting apparatus that may operate in the visible wavelength range. For example, the color filters on some video color cameras may have an IR response if not combined with an additional IR blocking filter. Therefore, a custom optic may include an IR blocking filter in the webcam channel and still have an IR response in the light sensing channels. The webcam may be separate from or integrated with the system 800. A portion of a FOV of the webcam may be used for detecting data used to determine coordinate locations of one or more touches within the touch sensing plane 170 (and/or on the touch surface 102) and/or Z-axis detection while still providing remote viewing capability, such as video image data of the users of the system 800 and possibly the surrounding area. By way of example only, a split-field optic may be used wherein one or more portions or areas of the optic of the webcam are used for touch detection and/or Z-axis detection and other portions of the optic of the webcam are used for acquiring video information. In some embodiments, the webcam may include optical components similar to those discussed previously with respect to the camera assemblies and may also include a light source. In some embodiments, the resolution and frame rate of the camera may be selected based on the resolution needed for determining multiple touches and gestures.
In some embodiments, the image sensor 130 may be used together with a simple lens, prism and/or mirror(s) to form a camera assembly that detects one FOV. In other embodiments, the image sensor 130 may be used together with more than one simple lens or prism to form a camera assembly that detects more than one FOV. Additionally, camera assemblies that use a simple lens or prism may be used in the same touch system together with camera assemblies that use more complex configurations utilizing multiple optical components and/or multiple optical surfaces to detect multiple fields of view.
It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. While the dimensions and types of materials described herein are intended to define the parameters of the invention, they are by no means limiting and are exemplary embodiments. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Further, the limitations of the following claims are not written in means-plus-function format and are not intended to be interpreted based on 35 U.S.C. §112, sixth paragraph, unless and until such claim limitations expressly use the phrase “means for” followed by a statement of function void of further structure.