Interactive input system

Information

  • Patent Grant
  • Patent Number
    8,692,768
  • Date Filed
    Friday, July 10, 2009
  • Date Issued
    Tuesday, April 8, 2014
Abstract
A method for resolving ambiguities between at least two pointers in a plurality of input regions defining an input area of an interactive input system. The method includes capturing images of the plurality of input regions, the images captured by a plurality of imaging devices having a field of view of at least a portion of the input area, processing image data from the images to identify a plurality of targets for the at least two pointers within the input area, and analyzing the plurality of targets to resolve a real location associated with each pointer.
Description
FIELD OF THE INVENTION

The present invention relates generally to input systems and in particular to a multi-touch interactive input system.


BACKGROUND OF THE INVENTION

Interactive input systems that allow users to inject input such as, for example, digital ink, mouse events, etc. into an application program using an active pointer (e.g., a pointer that emits light, sound or other signal), a passive pointer (e.g., a finger, cylinder or other object) or other suitable input device such as, for example, a mouse or trackball, are well known. These interactive input systems include but are not limited to: touch systems comprising touch panels employing analog resistive or machine vision technology to register pointer input such as those disclosed in U.S. Pat. Nos. 5,448,263; 6,141,000; 6,337,681; 6,747,636; 6,803,906; 7,232,986; 7,236,162; and 7,274,356 and in U.S. Patent Application Publication No. 2004/0179001 assigned to SMART Technologies ULC of Calgary, Alberta, Canada, assignee of the subject application, the contents of which are incorporated by reference; touch systems comprising touch panels employing electromagnetic, capacitive, acoustic or other technologies to register pointer input; tablet personal computers (PCs); laptop PCs; personal digital assistants (PDAs); and other similar devices.


Above-incorporated U.S. Pat. No. 6,803,906 to Morrison et al. discloses a touch system that employs machine vision to detect pointer interaction with a touch surface on which a computer-generated image is presented. A rectangular bezel or frame surrounds the touch surface and supports digital cameras at its four corners. The digital cameras have overlapping fields of view that encompass and look generally across the touch surface. The digital cameras acquire images looking across the touch surface from different vantages and generate image data. Image data acquired by the digital cameras is processed by on-board digital signal processors to determine if a pointer exists in the captured image data. When it is determined that a pointer exists in the captured image data, the digital signal processors convey pointer characteristic data to a master controller, which in turn processes the pointer characteristic data to determine the location of the pointer in (x,y) coordinates relative to the touch surface using triangulation. The pointer coordinates are then conveyed to a computer executing one or more application programs. The computer uses the pointer coordinates to update the computer-generated image that is presented on the touch surface. Pointer contacts on the touch surface can therefore be recorded as writing or drawing or used to control execution of application programs executed by the computer.


In environments where the touch surface is small, more often than not, users interact with the touch surface one at a time, typically using a single pointer. In situations where the touch surface is large, as described in U.S. patent application Ser. No. 10/750,219 to Hill et al., assigned to SMART Technologies ULC, the content of which is incorporated by reference, multiple users may interact with the touch surface simultaneously.


As will be appreciated, in machine vision touch systems, when a single pointer is in the fields of view of multiple imaging devices, the position of the pointer in (x,y) coordinates relative to the touch surface typically can be readily computed using triangulation. Difficulties are however encountered when multiple pointers are in the fields of view of multiple imaging devices as a result of pointer ambiguity and occlusion. Ambiguity arises when multiple pointers in the images captured by the imaging devices cannot be differentiated. In such cases, during triangulation a number of possible positions for the pointers can be computed but no information is available to the system to allow the correct pointer positions to be selected. Occlusion occurs when one pointer occludes another pointer in the field of view of an imaging device. In these instances, the image captured by the imaging device includes only one pointer. As a result, the correct positions of the pointers relative to the touch surface cannot be disambiguated from false target positions. Increasing the number of imaging devices allows pointer ambiguity and occlusion to be resolved but this of course results in increased touch system cost and complexity. Placing the additional cameras at different vantages improves disambiguation and alleviates occlusion difficulties, as each set of cameras views the pointers from a different angle. The additional cameras also improve the triangulation of the pointers.


It is therefore an object of the present invention to provide a novel interactive input system.


SUMMARY OF THE INVENTION

Accordingly, in one aspect there is provided in an interactive input system, a method of resolving ambiguities between at least two pointers in a plurality of input regions defining an input area comprising:


capturing images of the plurality of input regions, the images captured by a plurality of imaging devices having a field of view of a portion of the input area;


processing image data from the images to identify a plurality of targets for the at least two pointers within the input area; and


analyzing the plurality of targets to resolve a real location associated with each pointer.


According to another aspect there is provided in an interactive input system, a method of resolving ambiguities between at least two pointers in a plurality of input regions defining an input area comprising:


capturing images of the plurality of input regions, the images captured by a plurality of imaging devices having a field of view of at least a portion of the input area;


processing image data from the images to identify a plurality of potential targets for the at least two pointers within the input area, the plurality of potential targets comprising real and phantom targets; and


determining a pointer location for each of the at least two pointers utilizing the plurality of targets.


According to another aspect there is provided an interactive input system comprising:


an input surface defining an input area; and


a plurality of imaging devices having at least partially overlapping fields of view encompassing a plurality of input regions within the input area.


According to another aspect there is provided an interactive input system comprising:


at least one imaging device mounted on the periphery of a display surface and having a field of view encompassing a region of interest associated with the display surface;


a bezel disposed around the periphery of the display surface, defining an input area, the bezel having an inwardly facing diffusive surface normal to the display surface, the bezel positioned proximate to the at least one imaging device; and


at least one light source disposed within the bezel to illuminate the region of interest.


According to another aspect there is provided a bezel for an interactive input system comprising at least one bezel segment to be disposed around the periphery of an input surface, the at least one bezel segment having a front surface, two opposing side surfaces, a top surface, a bottom surface, and a back surface, and the back surface tapering towards the midpoint of the bezel segment.


According to another aspect, there is provided an interactive input system comprising:


an input surface having at least two input areas;


a plurality of imaging devices having at least partially overlapping fields of view encompassing at least one input region within the input area; and


a processing structure for processing image data acquired by the imaging devices to track the position of at least two pointers within the input regions and resolve ambiguities between the pointers.


According to another aspect there is provided an interactive input system comprising:


an input surface defining an input area;


at least three imaging devices having at least partially overlapping fields of view encompassing at least one input region within the input area; and


a processing structure for processing images acquired by the imaging devices to track the position of at least two pointers within the at least one input region, assign a weight to each image, and resolve ambiguities between the pointers based on each weighted image.


According to another aspect there is provided in an interactive input system, a method comprising:


capturing images of a plurality of input regions, the images captured by a plurality of imaging devices having a field of view of a portion of the plurality of input regions;


processing image data from the images to identify a plurality of targets for the at least two pointers within the input area;


determining a state for each image of each target and assigning a weight to the image data of each image based on the state; and


calculating a pointer location for each of the at least two pointers based on the weighted image data.


According to another aspect there is provided a computer readable medium embodying a computer program for resolving ambiguities between at least two pointers in a plurality of input regions defining an input area in an interactive input system, the computer program code comprising:


program code for resolving ambiguities between at least two pointers in a plurality of input regions defining an input area comprising:


program code for capturing images of the plurality of input regions, the images captured by a plurality of imaging devices having a field of view of at least a portion of the input area;


program code for processing image data from the images to identify a plurality of targets for the at least two pointers within the input area; and


program code for analyzing the plurality of targets to resolve a real location associated with each pointer.


According to another aspect there is provided a computer readable medium embodying a computer program for resolving ambiguities between at least two pointers in a plurality of input regions defining an input area in an interactive input system, the computer program code comprising:


program code for capturing images of the plurality of input regions, the images captured by a plurality of imaging devices having a field of view of at least a portion of the input area;


program code for processing image data from the images to identify a plurality of potential targets for the at least two pointers within the input area, the plurality of potential targets comprising real and phantom targets; and


program code for determining a pointer location for each of the at least two pointers utilizing the plurality of targets.


According to yet another aspect there is provided a computer readable medium embodying a computer program for resolving ambiguities between at least two pointers in a plurality of input regions defining an input area in an interactive input system, the computer program code comprising:


program code for capturing images of a plurality of input regions, the images captured by a plurality of imaging devices having a field of view of a portion of the plurality of input regions;


program code for processing image data from the images to identify a plurality of targets for the at least two pointers within the input area;


program code for determining a state for each image of each target and assigning a weight to the image data of each image based on the state; and


program code for calculating a pointer location for each of the at least two pointers based on the weighted image data.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments will now be described more fully with reference to the accompanying drawings in which:



FIG. 1 is a perspective view of an interactive input system;



FIG. 2 is another perspective view of the interactive input system of FIG. 1 with the cover removed to expose the imaging devices and illuminated bezel surrounding the input area;



FIG. 3 is yet another perspective view of the interactive input system of FIG. 1 with the cover removed;



FIG. 4 is an enlarged perspective view of a portion of the interactive input system of FIG. 1 with the cover removed;



FIG. 5 is a top plan view showing the imaging devices and illuminated bezel that surround the input area;



FIG. 6 is a side elevational view of a portion of the interactive input system of FIG. 1 with the cover removed;



FIG. 7 is a top plan view showing the imaging devices and input regions of the input area;



FIG. 8 is a schematic block diagram of one of the imaging devices;



FIG. 9 is a schematic block diagram of a master processor;



FIGS. 10a and 10b are perspective and top plan views, respectively, of a bezel segment forming part of the illuminated bezel;



FIG. 10c shows a generic diffuser on the bezel segment of FIGS. 10a and 10b;



FIG. 11a is a front elevational view of an exemplary illuminated bezel segment showing the dimple pattern of the diffusive front surface;



FIGS. 11b and 11c are front elevational views of the illuminated bezel segment showing alternative dimple patterns of the diffusive front surface;



FIG. 12 is a perspective view of a portion of the illuminated bezel segment showing an alternative diffusive front surface;



FIG. 13 is a flow chart showing the steps performed during a candidate generation procedure;



FIG. 14 is an observation table built by the candidate generation procedure;



FIG. 15 is a flow chart showing the steps performed during an association procedure;



FIG. 16 shows an example of multiple target tracking;



FIGS. 17 and 18 show two targets contacting the input area 62 and the weights assigned to the observations associated with the targets;



FIGS. 19 to 24 show multiple target scenarios, determined centerlines for each target observation and the weights assigned to the target observations;



FIG. 25 is a flow chart showing the steps performed during triangulation of real and phantom targets;



FIGS. 26 to 34 show alternative imaging device configurations for the interactive input system;



FIGS. 35 to 40 show alternative embodiments of the bezel segments of the interactive input system; and



FIG. 41 shows exemplary frame captures of the input area showing the three possible states for multiple targets as seen by an imaging device.





DETAILED DESCRIPTION OF THE EMBODIMENTS

Turning now to FIGS. 1 to 6, an interactive input system is shown and is generally identified by reference numeral 50. In this embodiment, the interactive input system 50 is in the form of a touch table that is capable of individually detecting and tracking eight (8) different pointers or targets brought into proximity of the touch table. As can be seen, touch table 50 comprises a rectangular box-like housing 52 having upright sidewalls 54 and a top wall 56. A liquid crystal display (LCD) or plasma panel 60 is centrally positioned on the top wall 56 and has a display surface that defines an input area 62. Alternatively, a projector-based table could be used. Imaging devices 70a to 70f are mounted on the LCD panel 60 about the input area 62 and look generally across the input area from different vantages. An illuminated bezel 72 surrounds the periphery of the input area 62 and overlies the imaging devices 70a to 70f. The illuminated bezel 72 provides backlight illumination into the input area 62. A cover 74 overlies the illuminated bezel 72.


In this embodiment, each of the imaging devices 70a to 70f is in the form of a digital camera device that has a field of view of approximately 90 degrees. The imaging devices 70a to 70d are positioned adjacent the four corners of the input area 62 and look generally across the entire input area 62. Two laterally spaced imaging devices 70e and 70f are positioned along one major side of the input area 62 intermediate the imaging devices 70a and 70b. The imaging devices 70e and 70f are angled in opposite directions and look towards the center of the input area 62 so that each imaging device 70e and 70f looks generally across two-thirds of the input area 62. This arrangement of imaging devices divides the input area 62 into three input regions, namely a left input region 62a, a central input region 62b and a right input region 62c as shown in FIGS. 5 and 7. The left input region 62a is within the fields of view of five (5) imaging devices, namely imaging devices 70a, 70b, 70c, 70d and 70f. The right input region 62c is also within the fields of view of five (5) imaging devices, namely imaging devices 70a, 70b, 70c, 70d and 70e. The central input region 62b is within the fields of view of all six (6) imaging devices 70a to 70f.
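
By way of illustration only, the following sketch (not part of the original disclosure) records the region-to-imaging-device visibility described above as a simple lookup table; the region keys and device labels merely reuse the reference numerals from the text.

```python
# Hypothetical sketch of the region-to-imaging-device mapping described above.
# Region keys and device labels reuse the reference numerals used in the text.
REGION_VISIBILITY = {
    "left_62a":    {"70a", "70b", "70c", "70d", "70f"},         # five devices
    "central_62b": {"70a", "70b", "70c", "70d", "70e", "70f"},  # all six devices
    "right_62c":   {"70a", "70b", "70c", "70d", "70e"},         # five devices
}

def devices_seeing(region: str) -> set:
    """Return the set of imaging devices whose fields of view cover a region."""
    return REGION_VISIBILITY[region]
```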



FIG. 8 is a schematic block diagram of one of the imaging devices. As can be seen, the imaging device includes a two-dimensional CMOS camera image sensor 100 having an associated lens assembly and a digital signal processor (DSP) 106. The parallel peripheral port 107 of the DSP 106 is coupled to the parallel port of the CMOS camera image sensor 100 by a data bus 108. A serial control bus 110 carries configuration information between the DSP 106 and the CMOS camera image sensor 100. A boot EPROM 112 and a power supply subsystem 114 are also included. Alternatively, CMOS line scan sensors could be used.


The CMOS camera image sensor 100 in this embodiment is an Aptina MT9V022 image sensor configured for a 30×752 pixel sub-array that can be operated to capture image frames at high frame rates including those in excess of 960 frames per second. The DSP 106 is manufactured by Analog Devices under part number ADSP-BF524.


The DSP 106 provides control information to the image sensor 100 via the control bus 110. The control information allows the DSP 106 to control parameters of the CMOS camera image sensor 100 and lens assembly such as exposure, gain, array configuration, reset and initialization. The DSP 106 also provides clock signals to the CMOS camera image sensor 100 to control the frame rate of the image sensor. The DSP 106 also communicates image information acquired from the image sensor 100 to a master controller 120 via a serial port 116.


Each of the imaging devices 70a to 70f communicates with the master controller 120, which is best shown in FIG. 9. Master controller 120 is accommodated by the housing 52 and includes a DSP 122, a boot EPROM 124, a serial line driver 126 and a power supply subsystem 128. The DSP 122 communicates with the DSPs 106 of each of the imaging devices 70a to 70f over a data bus 130 and via a serial port 132. The DSP 122 also communicates with a processing device 140 accommodated by the housing 52 via a data bus 134, a serial port 136, and the serial line driver 126. In this embodiment, the DSP 122 is also manufactured by Analog Devices. The serial line driver 126 is manufactured by Analog Devices under part number ADM222.


The master controller 120 and each imaging device follow a communication protocol that enables bi-directional communications via a common serial cable similar to a universal serial bus (USB). The transmission bandwidth is divided into thirty-two (32) 16-bit channels. Of the thirty-two channels, four (4) channels are assigned to each of the DSPs 106 in the imaging devices 70a to 70f and to the DSP 122 in the master controller 120. The remaining channels are unused and may be reserved for further expansion of control and image processing functionality (e.g., use of additional imaging devices). The master controller 120 monitors the channels assigned to the DSPs 106 while the DSP 106 in each of the imaging devices monitors the four (4) channels assigned to the master controller DSP 122. Communications between the master controller 120 and each of the imaging devices 70a to 70f are performed as background processes in response to interrupts.


In this embodiment, the processing device 140 is a general purpose computing device. The computing device comprises for example a processing unit, system memory (volatile and/or non-volatile memory), other removable or non-removable memory (hard drive, RAM, ROM, EEPROM, CD-ROM, DVD, flash memory, etc.), and a system bus coupling various components to the processing unit. The processing unit runs a host software application/operating system and provides display output to the LCD panel 60. During execution of the host software application/operating system, a graphical user interface is presented on the display surface of the LCD panel 60 allowing one or more users to interact with the graphical user interface via touch input within the input area 62.


The illuminated bezel 72 comprises four bezel segments 200a to 200d with each bezel segment extending substantially the entire length along one side of the input area 62. FIGS. 10a to 10c better illustrate the bezel segment 200a. As can be seen, the bezel segment is formed of a homogeneous piece of clear, light transmissive material such as for example Lexan®, Plexiglas, acrylic or other suitable material. The bezel segment 200a comprises a front surface 212 that extends substantially the entire length along one major side of the input area 62, a back surface 214, two side surfaces 216, a top surface 218 and a bottom surface 220. The front, back and side surfaces of the bezel segment 200a are generally normal to the plane of the input area 62. Each side surface 216 has a pair of laterally spaced bores formed therein that accommodate light sources. In this particular embodiment, the light sources are infrared (IR) light emitting diodes (LEDs) 222 although LEDs that emit light at different wavelengths may be used. The top, bottom, side and back surfaces of the bezel segment 200a are coated with a reflective material to reduce the amount of light that leaks from the bezel segment via these surfaces. The front surface 212 of the bezel segment 200a is textured or covered with a diffusive material to produce a diffusive surface that allows light to escape from the bezel segment into the input area 62.


The geometry of the bezel segment 200a is such that the reflective back surface 214 is v-shaped with the bezel segment being most narrow at its midpoint. As a result, the reflective back surface 214 defines a pair of angled reflective surface panels 214a and 214b with the ends of the panels that are positioned adjacent the center of the bezel segment 200a being closer to the front surface 212 than the opposite ends of the surface panels. Optionally, the front surface 212 can be a diffusive material (using film, surface modifications, lenses, paint, paper, or other type of diffusive element known to those of skill in the art) in order to scatter the light more evenly. This bezel segment configuration compensates for the attenuation of light emitted by the IR LEDs 222 that propagates through the body of the bezel segment 200a by tapering towards the midpoint of the bezel segment 200a. The luminous emittance of the bezel segment 200a is maintained generally constant across the front surface 212 by reducing the volume of the bezel segment 200a further away from the IR LEDs 222, where attenuation has diminished the light flux. By maintaining the luminous emittance generally constant across the bezel, the light leaking out the front surface 212 has a generally uniform density. This helps to make the illumination of the bezel segment 200a appear uniform to the imaging devices 70a to 70f.


Shallow notches 224 are provided in the bottom surface 220 of the bezel segment 200a to accommodate the imaging devices 70a, 70e, 70f and 70b. In this manner, the imaging devices are kept low relative to the front surface 212 so that the imaging devices block as little of the light escaping the bezel segment 200a via the diffusive front surface 212 as possible while still being able to view across the input surface, and thus, the height of the bezel segment can be reduced. An alternative embodiment may have slots within the bezel segments to accommodate the imaging devices.



FIGS. 11b and 11c show alternative dimple patterns provided on the front surface 212 of the bezel segment with the density of the dimples 226′ and 226″ increasing towards the center of the bezel segment to allow more light to escape the center of the bezel segment as compared to the ends of the bezel segment. This type of pattern may be used with or without the v-shaped bezel. FIG. 12 shows an alternative textured front surface 212′ configured to allow more light to escape the center of the bezel segment as compared to the ends of the bezel segment. As can be seen, in this embodiment spaced vertical grooves or slits 228 are formed in the front surface 212′ with the density of the grooves or slits 228 increasing towards the center of the bezel segment. Although not shown, rather than texturing the front surface of the bezel segment, a film may be placed on the front surface that is configured to allow more light to escape the center of the bezel segment as compared to the ends of the bezel segment.


The bezel segment 200c extending along the opposite major side of the input area 62 has a similar configuration to that described above with the exception that the number and positioning of the notches 224 (or alternatively slots) is varied to accommodate the imaging devices 70c and 70d that are covered by the bezel segment. The bezel segments 200b and 200d extending along the shorter sides of the input area 62 also have a similar configuration to that described above with the exceptions that the side surfaces of the bezel segments only accommodate a single IR LED 222 (as the lighting requirements are reduced due to the decreased length) and the number and the positioning of the notches 224 is varied to accommodate the imaging devices that are covered by the bezel segments.


During general operation of the interactive input system 50, the IR LEDs 222 of the bezel segments 200a to 200d are illuminated resulting in infrared backlighting escaping from the bezel segments via the front surfaces into the input area 62. As mentioned above, the bezel segments 200a to 200d are configured such that the illumination escaping each bezel segment is generally even along the length of the bezel segment. Each imaging device which looks across the input area 62 is conditioned by its associated DSP 106 to acquire image frames. When no pointer or target is in the field of view of an imaging device, the imaging device sees the infrared backlighting emitted by the bezel segments and thus, generates a “white” image frame. When a pointer occludes infrared backlighting emitted by at least one of the bezel segments, the target appears in the image frame as a “dark” region on a “white” background. For each imaging device, image data acquired by its image sensor 100 is processed by the DSP 106 to determine if one or more targets (e.g. pointers) is/are believed to exist in each captured image frame. When one or more targets is/are determined to exist in a captured image frame, pointer characteristic data is derived from that captured image frame identifying the target position(s) in the captured image frame.
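
As a minimal sketch of how a “dark” occlusion against the “white” backlight might be located in a captured image frame, the following hypothetical routine scans a one-dimensional intensity profile for runs of pixels below a threshold; the threshold value and the one-dimensional representation are illustrative assumptions and do not reflect the actual firmware of the DSP 106.

```python
def find_dark_regions(profile, threshold=128):
    """Locate candidate pointer observations in a 1-D intensity profile.

    profile   -- per-column brightness of one image frame row (0..255)
    threshold -- illustrative cutoff separating occluded (dark) pixels
                 from the bezel backlight (bright) pixels
    Returns a list of (left_edge, right_edge) pixel columns, one per dark run.
    """
    regions, start = [], None
    for col, value in enumerate(profile):
        if value < threshold and start is None:
            start = col                       # entering a dark run
        elif value >= threshold and start is not None:
            regions.append((start, col - 1))  # leaving a dark run
            start = None
    if start is not None:
        regions.append((start, len(profile) - 1))
    return regions

# Example: a bright backlight with one pointer occluding columns 40-45.
frame_row = [255] * 40 + [20] * 6 + [255] * 54
print(find_dark_regions(frame_row))  # [(40, 45)]
```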


The pointer characteristic data derived by each imaging device is then conveyed to the master controller 120. The DSP 122 of the master controller in turn processes the pointer characteristic data to allow the location(s) of the target(s) in (x,y) coordinates relative to the input area 62 to be calculated.


The calculated target coordinate data is then reported to the processing device 140, which in turn records the target coordinate data as writing or drawing if the target contact(s) is/are write events or injects the target coordinate data into the active application program being run by the processing device 140 if the target contact(s) is/are mouse events. As mentioned above, the processing device 140 also updates the image data conveyed to the LCD panel 60 so that the image presented on the display surface of the LCD panel 60 reflects the pointer activity.


When a single pointer exists in the image frames captured by the imaging devices 70, the location of the pointer in (x,y) coordinates relative to the input area 62 can be readily computed using triangulation. When multiple pointers exist in the image frames captured by the imaging devices 70, computing the positions of the pointers in (x,y) coordinates relative to the input area 62 is more challenging as a result of the pointer ambiguity and occlusion issues.


Pointer ambiguity arises when multiple targets are in contact with the input area 62 at different locations and are within the fields of view of multiple imaging devices. If the targets do not have distinctive markings to allow them to be differentiated, the observations of the targets in each image frame produce real and false target results that cannot be readily differentiated.


Pointer occlusion arises when a target in the field of view of an imaging device occludes another target in the field of view of the imaging device, resulting in observation merges as will be described.


Depending on the position of an imaging device relative to the input area 62 and the position of a target within the field of view of the imaging device, an imaging device may or may not see a target brought into its field of view adequately to enable image frames acquired by the imaging device to be used to determine the position of the target relative to the input area 62. Accordingly, for each imaging device, an active zone within the field of view of the imaging device is defined. The active zone is an area that extends a distance of radius ‘r’ away from the imaging device. This distance is pre-defined and based on how well the imaging device can measure an object at a given distance. When a target appears in the active zone of the imaging device, image frames acquired by the imaging device are deemed to observe the target sufficiently such that the observation of the target within the image frame captured by the imaging device is processed. When a target is within the field of view of an imaging device but is beyond the active zone of the imaging device, the observation of the target is ignored. When a target is within the radius ‘r’ but outside the field of view of the imaging device, it will not be seen and that imaging device is not used in further processing of the position of the target.
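
A minimal sketch of the active-zone test described above is shown below, assuming each imaging device is characterized by its position, an optical-axis direction and an approximately 90 degree field of view; the geometry helpers and parameter names are illustrative only.

```python
import math

def observation_is_used(camera_xy, camera_axis_deg, target_xy,
                        active_radius, half_fov_deg=45.0):
    """Return True if a target falls inside the imaging device's active zone.

    The observation is processed only when the target is both within the
    pre-defined radius 'r' of the device and inside its field of view.
    """
    dx = target_xy[0] - camera_xy[0]
    dy = target_xy[1] - camera_xy[1]
    distance = math.hypot(dx, dy)
    if distance > active_radius:
        return False                          # beyond the active zone: ignored
    bearing = math.degrees(math.atan2(dy, dx))
    off_axis = abs((bearing - camera_axis_deg + 180) % 360 - 180)
    return off_axis <= half_fov_deg           # inside the ~90 degree field of view
```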


When a target appears in the field of view of an imaging device, an observation of the target is created by the master processor. The observation is defined by the area formed between two straight lines, namely one line that extends from the imaging device to the bezel segment and crosses the left edge of the target, and another line that extends from the imaging device to the bezel segment and crosses the right edge of the target. When two or more imaging devices observe the same target and the observations created by the imaging devices overlap, the overlapping region of the observations is referred to as a candidate. The lines defining the perimeter of a candidate are referred to as a bounding box. If another observation partially overlaps a candidate, the bounding box is updated to include only the area where all observations overlap. At least two observations are needed to create a bounding box.


When a target is in an input region of the input area 62 and all imaging devices whose fields of view encompass the input region and whose active zones include at least part of the target create observations that overlap, the resulting candidate is deemed to be a consistent candidate. The consistent candidate may represent a real target or a phantom target.


The master processor 120 executes a candidate generation procedure to determine if any consistent candidates exist in captured image frames. FIG. 13 illustrates steps performed during the candidate generation procedure. During the candidate generation procedure, a table is initially generated, or “built”, that lists all imaging device observations so that the observations generated by each imaging device can be cross referenced with all other observations to see if one or more observations result in a candidate (step 300).


As the interactive input system 50 includes six (6) imaging devices 70a to 70f and is capable of simultaneously tracking eight (8) targets, the maximum number of candidates that is possible is equal to nine-hundred and sixty (960). For ease of illustration, FIG. 14 shows an exemplary table identifying three imaging devices with each imaging device generating three observations. Cells of the table with an X indicate observations that are not cross-referenced with each other. For example, imaging device observations cannot be cross-referenced with any of their own observations. Cells of the table that are redundant are also not cross-referenced. In FIG. 14, cells of the table designated with a “T” are processed. In this example of three imaging devices and three targets, the maximum number of candidates to examine is twenty-seven (27). Once the table has been created at step 300, the table is examined from left to right, starting on the top row and moving downwards, to determine if the table includes a candidate (step 302). If the table is determined to be empty (step 304), the table therefore does not include any candidates, and the candidate generation procedure ends (step 306).


At step 304, if the table is not empty and a candidate is located, a flag is set in the table for the candidate and the lines that make up the bounding box for the candidate resulting from the two imaging device observations are defined (step 308). A check is then made to determine if the position of the candidate is completely off the input area 62 (step 310). If the candidate is determined to be completely clear of the input area 62, the flag that was set in the table for the candidate is cleared (step 312) and the procedure reverts back to step 302 to determine if the table includes another candidate.


At step 310, if the candidate is determined to be partially or completely on the input area 62, a list of the imaging devices that have active zones encompassing at least part of the candidate is created excluding the imaging devices whose observations were used to create the bounding box at step 308 (step 314). Once the list of imaging devices has been created, the first imaging device in the list is selected (step 316). For the selected imaging device, each observation created for that imaging device is examined to see if it intersects with the bounding box created at step 308 (steps 318 and 320). If no observation intersects the bounding box, the candidate generation procedure reverts back to step 312 and the flag that was set in the table for the candidate is cleared. At step 320, if an observation that intersects the bounding box is located, the bounding box is updated using the lines that make up the observation (step 322). A check is then made to determine if another non-selected imaging device exists in the list (step 324). If so, the candidate generation procedure reverts back to step 316 and the next imaging device in the list is selected.


At step 324, if all of the imaging devices have been selected, the candidate is deemed to be a consistent candidate and is added to a consistent candidate list (step 326). Once the candidate has been added to the consistent candidate list, the combinations of observations that relate to the consistent candidate are removed from the table (step 328). Following this, the candidate generation procedure reverts back to step 302 to determine if another candidate exists in the table. As will be appreciated, the candidate generation procedure generates a list of consistent candidates representing targets that are seen by all of the imaging devices whose fields of view encompass the target locations. For example, a consistent candidate resulting from a target in the central input region 62b is seen by all six imaging devices 70a to 70f whereas a consistent candidate resulting from a target in the left or right input region 62a or 62c is only seen by five imaging devices.
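
The sketch below mirrors the looping structure of the candidate generation procedure (steps 300 to 328). For brevity it approximates each observation and each bounding box by an axis-aligned rectangle rather than the wedge formed between the two sight lines, and the devices_covering helper stands in for the active-zone test; both are simplifying assumptions rather than the procedure as claimed.

```python
def intersect(box_a, box_b):
    """Overlap of two (xmin, ymin, xmax, ymax) boxes, or None if disjoint."""
    xmin, ymin = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xmax, ymax = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    return (xmin, ymin, xmax, ymax) if xmin <= xmax and ymin <= ymax else None

def generate_consistent_candidates(observations, devices_covering):
    """observations: {device: [box, ...]}  (one box per observation)
    devices_covering(box): devices whose active zones encompass the box."""
    consistent = []
    devices = sorted(observations)
    # Cross-reference every observation of one device with every observation
    # of every other device (the table of step 300), skipping redundant cells.
    for i, dev_a in enumerate(devices):
        for dev_b in devices[i + 1:]:
            for obs_a in observations[dev_a]:
                for obs_b in observations[dev_b]:
                    bounding_box = intersect(obs_a, obs_b)      # step 308
                    if bounding_box is None:
                        continue                                # not a candidate
                    ok = True
                    for dev_c in devices_covering(bounding_box):
                        if dev_c in (dev_a, dev_b):
                            continue                            # step 314 exclusion
                        hits = [intersect(bounding_box, o)
                                for o in observations[dev_c]]
                        hits = [h for h in hits if h]
                        if not hits:                            # step 320 fails
                            ok = False
                            break
                        bounding_box = hits[0]                  # step 322 update
                    if ok:
                        consistent.append(bounding_box)         # step 326
    return consistent
```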


The master processor 120 also executes an association procedure as best shown in FIG. 15 to associate candidates with existing targets. During the association procedure, a table is created that contains the coordinates of the predicted target locations generated by the tracking procedure as will be described, and the location of the consistent candidates in the consistent candidate list created during the candidate generation procedure (step 400). A check is then made to determine if all of the consistent candidates have been examined (step 402). If it is determined that all of the consistent candidates have been examined, any predicted targets that are not associated with a consistent candidate are deemed to be associated with a dead path. As a result, the predicted target location and previous tracks associated with these predicted targets are deleted (step 404) and the association procedure is terminated (step 406).


At step 402, if it is determined that one or more of the consistent candidates have not been examined, the next unexamined consistent candidate in the list is selected and the distance between the consistent candidate and all of the predicted target locations is calculated (step 408). A check is then made to determine whether the distance between the consistent candidate and a predicted target location falls within a threshold (step 410). If the distance falls within the threshold, the consistent candidate is associated with the predicted target (step 412). Alternatively, if the distance is beyond the threshold, the consistent candidate is labeled as a new target (step 414). Following either of steps 412 and 414, the association procedure reverts back to step 402 to determine if all of the consistent candidates in the consistent candidate list have been selected. As a result, the association procedure identifies each candidate as either a new target contacting the input area 62 or an existing target.



FIG. 16 shows an example of the interactive input system 50 tracking three objects: A, B and C. The locations of four previously triangulated targets for objects A, B and C are represented by an X. From these previously tracked target locations, an estimate (e.g. predicted target location) is made for where the location of the object should appear in the current frame, and is represented by a +. Since a user can manipulate an object on the input area 62 at an approximate maximum velocity of 4 m/s, and if the interactive input system 50 is running at 100 frames per second, then the actual location of the object should appear within four (4) centimeters of the predicted target location (400 cm/s ÷ 100 frames/s × 1 frame = 4 cm). This threshold is represented by a broken circle surrounding the predicted target location. Objects B and C are both located within the threshold of their predicted target locations and are thus associated with those respective previously tracked target locations. The threshold around the predicted target location of object A does not contain object A, so the track is considered a dead track and is no longer used in subsequent image processing. Object D is seen at a position outside all of the calculated thresholds and is thus considered a new object and will continue to be tracked in subsequent frames.
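
The association test and the 4 cm threshold from the example above can be expressed compactly as follows; the data structures and the nearest-prediction matching rule are illustrative assumptions.

```python
import math

MAX_SPEED_CM_PER_S = 400.0    # ~4 m/s manipulation speed from the example
FRAME_RATE_HZ = 100.0
THRESHOLD_CM = MAX_SPEED_CM_PER_S / FRAME_RATE_HZ   # = 4 cm per frame

def associate(candidates, predicted, threshold=THRESHOLD_CM):
    """Associate consistent candidates with predicted target locations.

    candidates, predicted -- dicts of label -> (x, y) in centimetres.
    Returns (associations, new_targets, dead_tracks).
    """
    associations, new_targets = {}, []
    unmatched_predictions = set(predicted)
    for label, (cx, cy) in candidates.items():
        best, best_dist = None, threshold
        for track, (px, py) in predicted.items():
            d = math.hypot(cx - px, cy - py)
            if d <= best_dist:
                best, best_dist = track, d
        if best is None:
            new_targets.append(label)            # step 414: new target
        else:
            associations[label] = best           # step 412: existing target
            unmatched_predictions.discard(best)
    return associations, new_targets, sorted(unmatched_predictions)  # dead tracks
```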


The master processor 120 executes a state estimation procedure to determine the status of each candidate, namely whether each candidate is clear, merged or irrelevant. If a candidate is determined to be merged, a disentanglement process is initiated. During the disentanglement process, the state metrics of the targets are computed to determine the positions of partially and completely occluded targets. Initially, during the state estimation procedure, the consistent candidate list generated by the candidate generation procedure, the candidates that have been associated with existing targets by the association procedure, and the observation table are analyzed to determine whether each imaging device had a clear view of each candidate in its field of view or whether a merged view of candidates within its field of view existed. Candidates that are outside of the active zones of the imaging devices are deemed not relevant and flagged as being irrelevant.


The target and phantom track identifications from the previous frames are used as a reference to identify true target merges. When a target merge for an imaging device is deemed to exist, the disentanglement process for that imaging device is initiated. The disentanglement process makes use of the Viterbi algorithm. Depending on the number of true merges, the Viterbi algorithm assumes a certain state distinguishing between a merge of only two targets and a merge of more than two targets. In this particular embodiment, the disentanglement process is able to occupy one of the three states shown in FIG. 41, which depicts a four-input situation.


A Viterbi state transition method computes a metric for each of the three states. In this embodiment, the metrics are computed over five (5) image frames including the current image frame, and the best estimate of the current state is given by the branch with the lowest metric. The metrics are based on the combination of one-dimensional predicted positions and widths with one-dimensional merged observations. The state with the lowest branch metric is selected and is used to associate targets within a merge, thereby enabling the predictions to disentangle merged observations.
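
The following sketch shows only the general mechanism described above, namely accumulating a per-state metric over the last five image frames and selecting the state with the lowest value; the particular metric (a squared difference between the merged extent expected under each state hypothesis and the merged observation) is an illustrative assumption.

```python
def select_merge_state(predicted_1d, observed_merged_1d, state_models):
    """Pick the merge state whose accumulated metric over 5 frames is lowest.

    predicted_1d       -- last five frames of 1-D predicted (position, width)
                          tuples for the targets involved in the merge
    observed_merged_1d -- last five frames of the merged 1-D observation,
                          as (left_edge, right_edge) tuples
    state_models       -- {state_id: model(predictions) -> (left, right)},
                          one extent hypothesis per possible merge state
    """
    metrics = {}
    for state, model in state_models.items():
        total = 0.0
        for preds, (obs_l, obs_r) in zip(predicted_1d, observed_merged_1d):
            exp_l, exp_r = model(preds)          # extent expected under this state
            total += (exp_l - obs_l) ** 2 + (exp_r - obs_r) ** 2
        metrics[state] = total
    best = min(metrics, key=metrics.get)         # lowest accumulated metric wins
    return best, metrics
```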


For states 1 and 2, the disentanglement process yields the left and right edges for the merged targets. Only the center position for all the merges in state 3 is reported by the disentanglement process.


Once the disentanglement process has been completed, the state flag indicating a merge is cleared and a copy of the merged status before being cleared is maintained. To reduce triangulation inaccuracies due to disentangled observations, a weighting scheme is used on the disentangled targets. Targets associated with clean observations are assigned a weighting of one (1). Targets associated with merged observations are assigned a weighting in the range from 0.5 to 0.1 depending on how far apart the state metrics are from each other. The greater the distance between state metrics, the higher the confidence in disentangling the observations and hence the higher the weighting selected from the above range.
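
A minimal sketch of this weighting scheme is given below; the patent specifies only the weight of one (1) for clean observations and the 0.5 to 0.1 range for merged observations, so the normalisation of the state-metric separation to a 0..1 confidence value is an assumption.

```python
def observation_weight(is_merged, metric_separation=None):
    """Weight assigned to a target observation for triangulation.

    is_merged         -- True if the observation came from a merged view
    metric_separation -- confidence in the disentanglement, normalised to
                         0.0 (states indistinguishable) .. 1.0 (well separated)
    """
    if not is_merged:
        return 1.0                      # clean observation
    s = max(0.0, min(1.0, metric_separation or 0.0))
    # Map low confidence -> 0.1 and high confidence -> 0.5, per the stated range.
    return 0.1 + 0.4 * s
```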



FIG. 17 shows an example of two objects, A and B, contacting the input area 62 and being viewed by imaging devices 70a to 70f. Imaging devices 70a, 70e and 70c all have two observations, one of object A and the other of object B. Imaging devices 70f, 70b, and 70d all have one observation. Since at least one imaging device shows two observations, the state estimation module determines that there must be two objects contacting the input area 62. Imaging devices 70a, 70e and 70c each see objects A and B clearly and so each observation is assigned a weight of 1.0. Imaging devices 70f, 70b and 70d observe only one object, determine that these two objects must appear merged, and assign a weight of 0.5 to each observation.



FIG. 18 shows objects A and B as viewed by imaging devices 70f and 70b. Since these objects appear merged to these imaging devices, the state estimation procedure approximates the actual position of the objects based on earlier data. From previous tracking information, the approximate widths of the objects are known. Since the imaging devices 70f and 70b are still able to view one edge of each of the objects, the other edge is determined based on the previously stored width of the object. The state estimation module calculates the edges of both objects for both imaging devices 70f and 70b. Once both edges of each object are known, the center line for each object from each imaging device is calculated.
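
In one-dimensional image coordinates this edge completion can be sketched as follows, assuming the previously tracked widths are available in the same pixel units and that object A lies on the left of the merge; both assumptions are for illustration only.

```python
def complete_merged_edges(merged_left, merged_right, width_a, width_b):
    """Recover per-object edges and centerlines from a merged observation.

    merged_left / merged_right -- visible outer edges of the merged dark region,
                                  in image (pixel) coordinates
    width_a / width_b          -- previously tracked widths of objects A and B,
                                  with A assumed on the left of the merge
    Returns ((a_left, a_right, a_center), (b_left, b_right, b_center)).
    """
    a_left = merged_left
    a_right = a_left + width_a            # hidden edge from the stored width
    b_right = merged_right
    b_left = b_right - width_b            # hidden edge from the stored width
    a_center = (a_left + a_right) / 2.0
    b_center = (b_left + b_right) / 2.0
    return (a_left, a_right, a_center), (b_left, b_right, b_center)
```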


As mentioned previously, the master processor 120 also executes a tracking procedure to track existing targets. During the tracking procedure, each target seen by each imaging device is examined to determine its center point and a set of radii. The set of radii comprises, for each imaging device that sees the target, a radius represented by a line extending from the optical axis of the imaging device to the center point. If a target is associated with an object, a Kalman filter is used to estimate the current state of the target and to predict its next state. This information is then used to backwardly triangulate the location of the target at the next time step, which approximates an observation of the target if the target observation overlaps another target observation seen by the imaging device. If the target is not associated with a candidate, the target is considered dead and the target tracks are deleted from the track list. If a candidate is not associated with an existing target, and the number of targets is less than the maximum number of permitted targets, in this case eight (8), the candidate is considered to be a new target.
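
As an illustration of the predict/update cycle mentioned above, the following constant-velocity Kalman filter is a generic sketch; the state layout, motion model and noise values are assumptions and not the filter actually specified in the disclosure.

```python
import numpy as np

class ConstantVelocityKalman:
    """Generic 2-D constant-velocity Kalman filter: state = [x, y, vx, vy]."""

    def __init__(self, x, y, dt=0.01, process_var=1.0, meas_var=0.25):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                       # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # constant-velocity model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # only (x, y) is measured
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def predict(self):
        """Predict the next state; used to form the predicted target location."""
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, measured_xy):
        """Correct the prediction with an associated candidate position."""
        z = np.asarray(measured_xy, dtype=float)
        y = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state[:2]
```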



FIG. 19 shows an input situation, similar to that of FIGS. 16 to 18. The centerline for each imaging device observation of each target is shown along with the corresponding assigned weight. Note that the centerlines of targets A and C as seen from imaging device 70a can be determined, along with the centerline of targets B and C as seen from imaging device 70f. The centerline of targets A, B and C as seen from imaging device 70b could not be determined and as a result, the center of the merged observation is used for the centerline. The value of the weight assigned to these observations is very low.



FIG. 20 shows the triangulated location of target A from the centerlines of the observations from imaging devices 70a, 70f and 70b. Since imaging device 70f has a clear view of target A and thus an observation with a high weight, while the observation of imaging device 70a has a medium weight and the observation of imaging device 70b has a low weight, the triangulated location lies closer to the intersection of the two lines with the higher weights, since those observations are more reliable.


Similar to FIG. 20, FIG. 21 shows the centerline and triangulated position for target B. The triangulation is dominated by the highly weighted observations from imaging devices 70a and 70e.



FIG. 22 shows the centerline and triangulated position for target C. As can be seen, the triangulated position was only insignificantly influenced by the low weighted observation of imaging device 70b.



FIG. 23 shows an example of when a low weighted observation becomes important for an input. In this scenario the target is located almost directly between imaging devices 70a and 70c, which both have a clear view of the target and correspondingly highly weighted observations. Imaging device 70b has a low weighted observation due to an ambiguity such as the situation presented in FIG. 19. The triangulation result from two imaging devices, in this case 70a and 70c, triangulating a point directly or nearly directly between the two imaging devices is unreliable. In this case, even though the remaining observation has a low weight, it is important because it provides an additional view of the target needed for triangulation. Even though the observation has a low weight, it is still better than no other observation at all.



FIG. 24 depicts a scenario similar to that of FIG. 19 but with two imaging devices having low weighted observations (imaging devices 70b and 70d) and one imaging device having a high weighted observation (imaging device 70c). The observations from imaging devices 70b and 70d are averaged, resulting in a triangulated point between the two observations and along the observation from imaging device 70c. In this case the triangulated location uses both low weighted observations to better locate the target.



FIG. 25 shows the steps performed during triangulation of real and phantom targets captured on the input area. During triangulation, the number N of imaging devices being used to triangulate a target (X,Y) coordinate, a vector x of length N containing the image x-positions from each imaging device, a 2N x 3 matrix Q containing the projection matrices P for each imaging device, where Q = [P1 | P2 | … | PN], and a vector w of length N containing the weights assigned to each image observation position in x are used (step 500). If the weight vector w is not specified, the weights are set to a value of one (1). A binary flag for each parallel line of sight is then set to zero (0) (step 502). A tolerance for the parallel lines of sight is set to 2ε, where ε is the difference between 1 and the smallest exactly representable number greater than one. This tolerance gives an upper bound on the relative error due to rounding of floating point numbers and is hardware dependent. A least-squares design matrix A (N x 2) and right-hand side vector b are constructed by looping over the N available imaging device views (step 504). During this process, a 2 x 3 camera matrix P is extracted for the current image frame. A row is added to the design matrix containing [P11−x·P21, P12−x·P22]. An element is added to b containing [x·P23−P13]. An N x N diagonal matrix W containing the weights w is then created. The determinant of the weighted normal equations (typically computed using the method outlined in Wolfram MathWorld) is evaluated and a check is made to determine whether or not it is less than the tolerance for parallelism according to det((W·A)^T·(W·A)) < 2ε (step 506). This test determines whether A has linearly dependent rows. If the determinant is less than the tolerance, the parallelism flag is set to one (1) and X and Y are set to empty matrices (step 508). Otherwise, the linear least-squares problem for X and Y is solved according to (W·A)^T·(W·A)·X = (W·A)^T·(W·b) (step 510), where X = [X, Y]^T is a two-element vector. The errors σX and σY in X and Y are computed from the square roots of the diagonal elements Cii of the covariance matrix C defined by C = σ²·((W·A)^T·(W·A))^−1, where σ is the RMS error of the fit (i.e. the square root of chi-squared).


If N=2, no errors are computed as the problem is exactly determined. A check is then made to determine if the triangulated point is behind any of the imaging devices (step 512). Using the triangulated position, the expected target position for each imaging device is computed according to xcal = P·X, where xcal contains the image position x and the depth λ. The second element of xcal is the depth λ from the imaging device to the triangulated point. If λ < 0, the depth test flag is set to one (1) and is otherwise set to zero (0). If all components of xcal are negative, the double negative case is ignored. The computed X, Y, error values and test flags are then returned (step 514).
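
The weighted least-squares triangulation of steps 500 to 510 can be sketched as follows; the NumPy realization, the helper name and the return convention are illustrative, while the design-matrix rows and right-hand side follow the construction given above.

```python
import numpy as np

def triangulate(x_obs, P_list, weights=None):
    """Weighted least-squares triangulation of one target.

    x_obs   -- length-N vector of image x-positions, one per imaging device
    P_list  -- list of N 2x3 camera projection matrices P
    weights -- length-N weights w (defaults to all ones)
    Returns (X, Y, parallel_flag).
    """
    N = len(x_obs)
    w = np.ones(N) if weights is None else np.asarray(weights, dtype=float)
    A = np.zeros((N, 2))
    b = np.zeros(N)
    for i, (x, P) in enumerate(zip(x_obs, P_list)):
        # Row of the design matrix: [P11 - x*P21, P12 - x*P22]
        A[i] = [P[0, 0] - x * P[1, 0], P[0, 1] - x * P[1, 1]]
        # Right-hand side element: x*P23 - P13
        b[i] = x * P[1, 2] - P[0, 2]
    W = np.diag(w)
    WA, Wb = W @ A, W @ b
    tol = 2 * np.finfo(float).eps
    if np.linalg.det(WA.T @ WA) < tol:        # nearly parallel lines of sight
        return None, None, True
    X, Y = np.linalg.lstsq(WA, Wb, rcond=None)[0]
    return X, Y, False
```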


In the embodiment shown and described above, the interactive input system comprises six (6) imaging devices arranged about the input area 62 with four (4) imaging devices being positioned adjacent the corners of the input area and two imaging devices 70e and 70f positioned at spaced locations along the same side of the input area. Those of skill in the art will appreciate that the configuration and/or number of imaging devices employed in the interactive input system may vary to suit the particular environment in which the interactive input system is to be employed. For example, the imaging devices 70e and 70f do not need to be positioned along the same side of the input area. Rather, as shown in FIG. 26, imaging device 70e can be positioned along one side of the input area and imaging device 70f can be positioned along the opposite side of the input area.


Turning now to FIG. 27, an alternative imaging device configuration for the interactive input system is shown. In this configuration, the interactive input system employs four (4) imaging devices 70a, 70e, 70f, and 70b arranged along one side of the input area 62. Imaging devices 70a, 70b are positioned adjacent opposite corners of the input area 62 and look generally across the entire input area 62. The intermediate imaging devices 70e, 70f are angled in opposite directions towards the center of the input area 62 so that each of the imaging devices 70e and 70f looks generally across two-thirds of the input area 62. This arrangement of imaging devices divides the input area 62 into three input regions, namely a left input region 62a, a central input region 62b and a right input region 62c as shown. The left input region 62a is within the fields of view of three imaging devices, namely imaging devices 70a, 70e, and 70b. The right input region 62c is also within the fields of view of three imaging devices, namely imaging devices 70a, 70f, and 70b. The central input region 62b is within the fields of view of all four imaging devices 70a, 70e, 70f and 70b.



FIG. 28 shows another alternative imaging device configuration for the interactive input system. In this configuration, the interactive input system employs four (4) imaging devices 70a, 70b, 70c, 70d with each imaging device being positioned adjacent a different corner of the input area 62 and looking generally across the entire input area 62. With this imaging device arrangement, the entire input area 62 is within the fields of view of all four imaging devices.



FIG. 29 shows yet another alternative imaging device configuration for the interactive input system. In this configuration, the interactive input system employs three (3) imaging devices 70a, 70b, 70c with each imaging device being positioned adjacent a different corner of the input area 62 and looking generally across the entire input area 62. With this imaging device arrangement, the entire input area is within the fields of view of all three imaging devices.


In FIG. 30, yet another alternative imaging device configuration for the interactive input system is shown. In this configuration, the interactive input system employs eight (8) imaging devices, with four imaging devices 70a, 70e, 70f, 70b being arranged along one major side of the input area 62 and with four imaging devices 70d, 70g, 70h, 70c being arranged along the opposite major side of the input area 62. Imaging devices 70a, 70b, 70c, 70d are positioned adjacent the corners of the input area and look generally across the entire input area. The intermediate imaging devices 70e, 70f, 70g, 70h along each major side of the input area are angled in opposite directions towards the center of the input area 62. This arrangement of imaging devices divides the input area into 3 input regions. The number in each input region identifies the number of imaging devices whose fields of view see the input region.



FIG. 31 shows yet another alternative imaging device configuration for the interactive input system. In this configuration, the interactive input system employs eight (8) imaging devices 70. Imaging devices 70a, 70b, 70c, 70d are positioned adjacent the corners of the input area 62 and look generally across the entire input area. Intermediate imaging devices 70f, 70g are positioned on opposite major sides of the input area and are angled in opposite directions towards the center of the input area 62. Intermediate imaging devices 70i, 70j are positioned on opposite minor sides of the input area 62 and are angled in opposite directions towards the center of the input area 62. This arrangement of imaging devices divides the input area into nine (9) input regions as shown. The number in each input region identifies the number of imaging devices whose fields of view see the input region.


In FIG. 32, yet another alternative imaging device configuration for the interactive input system is shown. In this configuration, the interactive input system employs twelve (12) imaging devices. Imaging devices 70a, 70b, 70c, 70d are positioned adjacent the corners of the input area 62 and look generally across the entire input area 62. Pairs of intermediate imaging devices 70e and 70f, 70g and 70h, 70i and 70k, 70j and 70l are positioned along each side of the input area and are angled in opposite directions towards the center of the input area 62. This arrangement of imaging devices divides the input area into nine (9) input regions as shown. The number in each input region identifies the number of imaging devices whose fields of view see the input region.



FIG. 33 shows yet another alternative imaging device configuration for the interactive input system. In this configuration, the interactive input system employs sixteen (16) imaging devices 70. Imaging devices 70a, 70b, 70c, 70d are positioned adjacent the corners of the input area and look generally across the entire input area. Pairs of intermediate imaging devices 70e and 70f, 70g and 70h, 70i and 70k, 70j and 70l are positioned along each side of the input area and are angled in opposite directions towards the center of the input area 62. Four midpoint imaging devices 70m, 70n, 70o, 70p are positioned at the midpoint of each side of the input area 62 and generally look across the center of the input area 62. This arrangement of imaging devices 70 divides the input area 62 into twenty-seven (27) input regions as shown. The number in each input region identifies the number of imaging devices whose fields of view see the input region.



FIG. 34 shows yet another alternative imaging device configuration for the interactive input system. In this configuration, the interactive input system employs twenty (20) imaging devices 70. Imaging devices 70a, 70b, 70c, 70d are positioned adjacent the corners of the input area and look generally across the entire input area. Pairs of intermediate imaging devices 70e and 70f, 70g and 70h, 70i and 70k, 70j and 70l are positioned along each side of the input area and are angled in opposite directions towards the center of the input area 62. Four further intermediate imaging devices 70q, 70r, 70s, 70t are positioned along the major sides of the input area 62, two per side, and are angled in opposite directions towards the center of the input area 62. Four midpoint imaging devices 70m, 70n, 70o, 70p are positioned at the midpoint of each side of the input area 62 and generally look across the center of the input area 62. This arrangement of imaging devices divides the input area into thirty-seven (37) input regions as shown. The number in each input region identifies the number of imaging devices whose fields of view see the input region.
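
As a further illustration applicable to any of the arrangements of FIGS. 28 to 34, the coverage count shown in each figure can be reproduced by sampling the input area and counting, at each sample, the imaging devices whose fields of view contain it; contiguous samples with equal counts correspond to the input regions. The sketch below uses an assumed six-device layout; all positions, aiming directions and fields of view are guesses made for illustration, not values from the patent.

    import math

    def in_fov(cam, point):
        """True if `point` lies inside the camera's angular field of view."""
        dx, dy = point[0] - cam["pos"][0], point[1] - cam["pos"][1]
        if dx == 0 and dy == 0:
            return False
        diff = (math.atan2(dy, dx) - cam["aim"] + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= cam["fov"] / 2

    def coverage_map(cameras, width, height, nx=24, ny=12):
        """Coarse map of how many devices see each sample of the input area."""
        rows = []
        for j in range(ny):
            y = (j + 0.5) * height / ny
            rows.append("".join(
                str(sum(in_fov(c, ((i + 0.5) * width / nx, y)) for c in cameras))
                for i in range(nx)))
        return "\n".join(reversed(rows))  # y increases towards the top of the printout

    # Assumed layout: four corner devices, each with a 90 degree field of view
    # centred on the corner bisector so it sees the whole area, plus one
    # intermediate device at the midpoint of each major side.
    W, H = 2.0, 1.0
    cams = [
        {"pos": (0, 0), "aim": math.radians(45),   "fov": math.radians(90)},
        {"pos": (W, 0), "aim": math.radians(135),  "fov": math.radians(90)},
        {"pos": (W, H), "aim": math.radians(-135), "fov": math.radians(90)},
        {"pos": (0, H), "aim": math.radians(-45),  "fov": math.radians(90)},
        {"pos": (W / 2, 0), "aim": math.radians(90),  "fov": math.radians(60)},
        {"pos": (W / 2, H), "aim": math.radians(-90), "fov": math.radians(60)},
    ]
    print(coverage_map(cams, W, H))  # bands of equal digits are the input regions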


Although particular embodiments of the bezel segments have been described above, those of skill in the art will appreciate that many alternatives are available. For example, more or fewer IR LEDs may be provided in one or more of the bezel surfaces. For instance, FIG. 35 shows an embodiment of the bezel segment generally identified by numeral 600 with one surface accommodating a pair of IR LEDs 222a, 222b and the opposite surface accommodating a single IR LED 222c. If desired, rather than providing notches in the undersurface of the bezel segments, recesses 602 may be provided in the body of the bezel segments to accommodate the imaging devices as shown in FIG. 36. Of course, a combination of notches and recesses may be employed.


In the above embodiments, each bezel segment has a planar front surface and a v-shaped back reflective surface. If desired, the configuration of one or more of the bezel segments can be reversed as shown in FIG. 37 so that the bezel segment 700 comprises a planar reflective back surface 204 and a v-shaped front surface 702. Optionally, the v-shaped front surface could be diffusive. Alternatively, the v-shaped back surface could be diffusive and the planar front surface could be transparent. In a further alternative embodiment of the bezel segment 800, instead of using a v-shaped back reflective surface, a parabolic-shaped back reflective surface 802 may be used as shown in FIG. 40, or other similarly shaped back reflective surfaces may be used. FIG. 38 shows the interactive input system employing an illuminated bezel formed of a combination of bezel segments. In particular, bezel segment 700 is of the type shown in FIG. 37 while bezel segments 200b to 200d are of the type shown in FIGS. 1 to 6. If desired, for the bezel segment, supplementary IR LEDs 222a, 222b may be accommodated by bores formed in the planar reflective back surface as shown in FIG. 39. In this case, the supplementary IR LEDs 222a, 222b are angled towards the center of the bezel segment.


In a still further embodiment, the textured front surface 212 has a dimple pattern arranged such that the density of dimples 226 increases towards the center of the bezel segment 200a as shown in FIG. 11a. This dimple pattern allows more light to escape the center of the bezel segment as compared to the ends of the bezel segment. As will be appreciated, if such a dimple pattern were not provided on the diffusive front surface 212, the majority of light emitted by the IR LEDs 222 would exit the bezel segment adjacent the ends of the diffusive front surface 212 closest to the IR LEDs 222 and, as a result, the bezel segment illumination would not appear uniform to the imaging devices 70a to 70f.
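
By way of a heavily simplified illustration of this compensation, one may assume that light injected at each end of the bezel segment decays exponentially along its length and that the light coupled out locally is proportional to the local dimple density; choosing the density inversely proportional to the local internal intensity then yields approximately uniform out-coupled illumination along the segment. The attenuation model and its coefficient in the sketch below are assumptions made for illustration only; the patent does not specify a dimple-density formula.

    import math

    LENGTH = 1.0   # normalized bezel-segment length
    ALPHA = 2.0    # assumed attenuation coefficient (per unit length)

    def internal_intensity(x):
        """Relative internal light intensity at position x, fed from both ends."""
        return math.exp(-ALPHA * x) + math.exp(-ALPHA * (LENGTH - x))

    def dimple_density(x):
        """Relative dimple density: highest at the center, lowest near the IR LEDs."""
        return 1.0 / internal_intensity(x)

    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        out = internal_intensity(x) * dimple_density(x)  # constant by construction
        print(f"x={x:.2f}  density={dimple_density(x):.2f}  out-coupled={out:.2f}")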


Although embodiments of the interactive input system have been shown and described above, those of skill in the art will appreciate that further variations and modifications may be made without departing from the spirit and scope thereof as defined by the appended claims.

Claims
  • 1. An interactive input system comprising: an input surface having at least two input areas; a plurality of imaging devices having at least partially overlapping fields of view encompassing at least one input region within the input area; and a processing structure configured to process image data acquired by the imaging devices to track the position of at least two pointers within the input regions by: identifying consistent candidates in the processed image data to facilitate resolving ambiguities between the pointers, each of the consistent candidates comprising observations located in image frames captured by all of the plurality of imaging devices having a field of view encompassing a target's input region; determining the position of the consistent candidates by assigning a weight to the observations based on the clarity of the observations; wherein the observations are clear when the imaging devices have a clear view of the observations and the consistent candidates are merged when the imaging devices have a merged view of a plurality of the observations, and wherein a first weight is assigned to the clear observations, the first weight having a predefined value, and a second weight is assigned to the merged observations, the second weight being less than the first weight.
  • 2. The interactive input system of claim 1, wherein the processing structure further comprises an association procedure module to associate the consistent candidates with targets associated with the at least two pointers.
  • 3. The interactive input system of claim 2, wherein the processing structure further comprises a tracking procedure module for tracking the targets in the at least two input regions.
  • 4. The interactive input system of claim 3, wherein the processing structure further comprises a state estimation module for determining locations of the at least two pointers based on information from the association procedure module and the tracking procedure module and image data from the plurality of imaging devices.
  • 5. The interactive input system of claim 4, wherein the processing structure further comprises a disentanglement process module for, when the at least two pointers appear merged, determining locations for each of the pointers based on information from the state estimation module, the tracking procedure module and image data from the plurality of imaging devices.
  • 6. The interactive input system of claim 1, wherein the processing structure uses weighted triangulation for processing the image data.
  • 7. The interactive input system of claim 6, wherein weights are assigned to the image data from each of the plurality of imaging devices.
  • 8. An interactive input system comprising: an input surface defining an input area; and at least three imaging devices having at least partially overlapping fields of view encompassing at least one input region within the input area; a processing structure for processing images acquired by the imaging devices to determine the position of at least two pointers within the at least one input region by assigning a weight to observations in each image based on the clarity of the observation, the observations representative of the at least two pointers, and triangulate the positions of the at least two pointers based on each weighted observation; wherein the observations are clear when the imaging devices have a clear view of the observations and the observations are merged when the imaging devices have a merged view of a plurality of the observations; and wherein a first weight is assigned to the clear observations, the first weight having a predefined value, and a second weight is assigned to the merged observations, the second weight being less than the first weight.
  • 9. The interactive input system of claim 8, wherein the weighted triangulation resolves ambiguities in the observations.
  • 10. In an interactive input system, a method of resolving ambiguities between at least two pointers in a plurality of input regions defining an input area, comprising: capturing images of a plurality of input regions, the images captured by a plurality of imaging devices having a field of view of a portion of the plurality of input regions; processing image data from the images to identify a plurality of targets for the at least two pointers within the input area; determining a state for each target of each image, the state indicating whether the target is clear or merged, wherein the target is clear when the imaging devices have a clear view of the target and the target is merged when the imaging devices have a merged view of a plurality of the targets; assigning a weight to each target of each image based on the determined state, wherein a first weight is assigned to the image data for the clear target, the first weight having a predefined value, and a second weight is assigned to the image data for the merged target, the second weight being less than the first weight; and calculating a pointer location for each of the at least two pointers based on the weighted target data.
  • 11. The method of claim 10, wherein the calculating is performed using weighted triangulation.
  • 12. The method of claim 10 further comprising determining real and phantom targets associated with each pointer.
  • 13. The method of claim 10, wherein the second weight is selected from a range of values, wherein a top value of the range of values is less than the first weight.
  • 14. A non-transitory computer readable medium embodying a computer program for resolving ambiguities between at least two pointers in a plurality of input regions defining an input area in an interactive input system, the computer program code operable to: receive images of a plurality of input regions, the images captured by a plurality of imaging devices having a field of view of a portion of the plurality of input regions; process image data from the images to identify a plurality of targets for the at least two pointers within the input area; determine a state for each target of each image, the state indicating whether the target is clear or merged, wherein the target is clear when the imaging devices have a clear view of the target and the target is merged when the imaging devices have a merged view of a plurality of the targets; assign a weight to each target of each image based on the determined state, wherein a first weight is assigned to the image data for the clear target, the first weight having a predefined value, and a second weight is assigned to the image data for the merged target, the second weight being less than the first weight; and calculate a pointer location for each of the at least two pointers based on the weighted target data.
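
For illustration of the weighting recited in claims 1, 8, 10 and 14, the sketch below assigns a predefined first weight to clear observations and a smaller second weight to merged observations, and then triangulates a pointer position as the weighted least-squares intersection of the corresponding sight lines. The particular weight values, the ray parameterization and the least-squares formulation are assumptions made for this sketch; the claims require only the relative weighting and a triangulation based on the weighted observations.

    import math

    CLEAR_WEIGHT = 1.0    # first weight: assumed predefined value for clear observations
    MERGED_WEIGHT = 0.3   # second weight: assumed smaller value for merged observations

    def weighted_triangulate(observations):
        """Weighted least-squares intersection of 2D sight lines.

        Each observation is (camera_position, bearing_angle, state), where state
        is "clear" or "merged".  The returned point minimizes the weighted sum
        of squared perpendicular distances to the sight lines.
        """
        # Accumulate the 2x2 normal equations A x = b with A = sum w (I - d d^T).
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (px, py), angle, state in observations:
            w = CLEAR_WEIGHT if state == "clear" else MERGED_WEIGHT
            dx, dy = math.cos(angle), math.sin(angle)
            m11, m12, m22 = 1.0 - dx * dx, -dx * dy, 1.0 - dy * dy  # I - d d^T
            a11 += w * m11; a12 += w * m12; a22 += w * m22
            b1 += w * (m11 * px + m12 * py)
            b2 += w * (m12 * px + m22 * py)
        det = a11 * a22 - a12 * a12
        return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

    # Three devices observe a pointer near (0.6, 0.4); the third device reports
    # a merged observation (two pointers in line), so its bearing is slightly
    # off and is down-weighted.
    obs = [
        ((0.0, 0.0), math.atan2(0.4, 0.6), "clear"),
        ((1.0, 0.0), math.atan2(0.4, -0.4), "clear"),
        ((0.0, 1.0), math.atan2(-0.55, 0.62), "merged"),
    ]
    print(weighted_triangulate(obs))  # close to (0.6, 0.4); the merged ray pulls it only slightly

Down-weighting the merged observation limits how far an ambiguous, merged view can pull the triangulated location away from the position indicated by the clear views, which is the effect the claims ascribe to the second, smaller weight.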
US Referenced Citations (239)
Number Name Date Kind
4107522 Walter Aug 1978 A
4144449 Funk et al. Mar 1979 A
4247767 O'Brien et al. Jan 1981 A
4507557 Tsikos Mar 1985 A
4558313 Garwin et al. Dec 1985 A
4672364 Lucas Jun 1987 A
4737631 Sasaki et al. Apr 1988 A
4742221 Sasaki et al. May 1988 A
4746770 McAvinney May 1988 A
4762990 Caswell et al. Aug 1988 A
4782328 Denlinger Nov 1988 A
4818826 Kimura Apr 1989 A
4820050 Griffin Apr 1989 A
4822145 Staelin Apr 1989 A
4831455 Ishikawa May 1989 A
4868912 Doering Sep 1989 A
4980547 Griffin Dec 1990 A
5025314 Tang et al. Jun 1991 A
5097516 Amir Mar 1992 A
5109435 Lo et al. Apr 1992 A
5130794 Ritchey Jul 1992 A
5140647 Ise et al. Aug 1992 A
5162618 Knowles Nov 1992 A
5168531 Sigel Dec 1992 A
5196835 Blue et al. Mar 1993 A
5239373 Tang et al. Aug 1993 A
5317140 Dunthorn May 1994 A
5359155 Helser Oct 1994 A
5374971 Clapp et al. Dec 1994 A
5414413 Tamaru et al. May 1995 A
5448263 Martin Sep 1995 A
5483261 Yasutake Jan 1996 A
5483603 Luke et al. Jan 1996 A
5484966 Segen Jan 1996 A
5490655 Bates Feb 1996 A
5502568 Ogawa et al. Mar 1996 A
5525764 Junkins et al. Jun 1996 A
5528263 Platzker et al. Jun 1996 A
5528290 Saund Jun 1996 A
5537107 Funado Jul 1996 A
5554828 Primm Sep 1996 A
5581276 Cipolla et al. Dec 1996 A
5581637 Cass et al. Dec 1996 A
5594469 Freeman et al. Jan 1997 A
5594502 Bito et al. Jan 1997 A
5617312 Iura et al. Apr 1997 A
5638092 Eng et al. Jun 1997 A
5670755 Kwon Sep 1997 A
5686942 Ball Nov 1997 A
5729704 Stone et al. Mar 1998 A
5734375 Knox et al. Mar 1998 A
5736686 Perret, Jr. et al. Apr 1998 A
5737740 Henderson et al. Apr 1998 A
5745116 Pisutha-Arnond Apr 1998 A
5764223 Chang et al. Jun 1998 A
5771039 Ditzik Jun 1998 A
5790910 Haskin Aug 1998 A
5801704 Oohara et al. Sep 1998 A
5818421 Ogino et al. Oct 1998 A
5818424 Korth Oct 1998 A
5819201 DeGraaf Oct 1998 A
5825352 Bisset et al. Oct 1998 A
5831602 Sato et al. Nov 1998 A
5911004 Ohuchi et al. Jun 1999 A
5914709 Graham et al. Jun 1999 A
5920342 Umeda et al. Jul 1999 A
5936615 Waters Aug 1999 A
5943783 Jackson Aug 1999 A
5963199 Kato et al. Oct 1999 A
5982352 Pryor Nov 1999 A
5988645 Downing Nov 1999 A
6002808 Freeman Dec 1999 A
6008798 Mato, Jr. et al. Dec 1999 A
6031531 Kimble Feb 2000 A
6061177 Fujimoto May 2000 A
6075905 Herman et al. Jun 2000 A
6100538 Ogawa Aug 2000 A
6104387 Chery et al. Aug 2000 A
6118433 Jenkin et al. Sep 2000 A
6122865 Branc et al. Sep 2000 A
6128003 Smith et al. Oct 2000 A
6141000 Martin Oct 2000 A
6147678 Kumar et al. Nov 2000 A
6153836 Goszyk Nov 2000 A
6161066 Wright et al. Dec 2000 A
6179426 Rodriguez, Jr. et al. Jan 2001 B1
6188388 Arita et al. Feb 2001 B1
6191773 Maruno et al. Feb 2001 B1
6208329 Ballare Mar 2001 B1
6208330 Hasegawa et al. Mar 2001 B1
6209266 Branc et al. Apr 2001 B1
6226035 Korein et al. May 2001 B1
6229529 Yano et al. May 2001 B1
6252989 Geisler et al. Jun 2001 B1
6256033 Nguyen Jul 2001 B1
6262718 Findlay et al. Jul 2001 B1
6310610 Beaton et al. Oct 2001 B1
6323846 Westerman et al. Nov 2001 B1
6328270 Elberbaum Dec 2001 B1
6335724 Takekawa et al. Jan 2002 B1
6337681 Martin Jan 2002 B1
6339748 Hiramatsu Jan 2002 B1
6353434 Akebi et al. Mar 2002 B1
6359612 Peter et al. Mar 2002 B1
6414671 Gillespie et al. Jul 2002 B1
6414673 Wood et al. Jul 2002 B1
6421042 Omura et al. Jul 2002 B1
6427389 Branc et al. Aug 2002 B1
6429856 Omura et al. Aug 2002 B1
6496122 Sampsell Dec 2002 B2
6497608 Ho et al. Dec 2002 B2
6498602 Ogawa Dec 2002 B1
6507339 Tanaka Jan 2003 B1
6512838 Rafii et al. Jan 2003 B1
6517266 Saund Feb 2003 B2
6518600 Shaddock Feb 2003 B1
6522830 Yamagami Feb 2003 B2
6529189 Colgan et al. Mar 2003 B1
6530664 Vanderwerf et al. Mar 2003 B2
6531999 Trajkovic Mar 2003 B1
6545669 Kinawi et al. Apr 2003 B1
6559813 DeLuca et al. May 2003 B1
6563491 Omura May 2003 B1
6567078 Ogawa May 2003 B2
6567121 Kuno May 2003 B1
6570612 Saund et al. May 2003 B1
6577299 Schiller et al. Jun 2003 B1
6587099 Takekawa Jul 2003 B2
6594023 Omura et al. Jul 2003 B1
6597348 Yamazaki et al. Jul 2003 B1
6608619 Omura et al. Aug 2003 B2
6626718 Hiroki Sep 2003 B2
6630922 Fishkin et al. Oct 2003 B2
6633328 Byrd et al. Oct 2003 B1
6650822 Zhou Nov 2003 B1
6674424 Fujioka Jan 2004 B1
6683584 Ronzani et al. Jan 2004 B2
6690357 Dunton et al. Feb 2004 B1
6690363 Newton Feb 2004 B2
6690397 Daignault, Jr. Feb 2004 B1
6710770 Tomasi et al. Mar 2004 B2
6736321 Tsikos et al. May 2004 B2
6741250 Furlan et al. May 2004 B1
6747636 Martin Jun 2004 B2
6756910 Ohba et al. Jun 2004 B2
6760009 Omura et al. Jul 2004 B2
6760999 Branc et al. Jul 2004 B2
6774889 Zhang et al. Aug 2004 B1
6803906 Morrison et al. Oct 2004 B1
6864882 Newton Mar 2005 B2
6911972 Brinjes Jun 2005 B2
6919880 Morrison et al. Jul 2005 B2
6933981 Kishida et al. Aug 2005 B1
6947032 Morrison et al. Sep 2005 B2
6954197 Morrison et al. Oct 2005 B2
6972401 Akitt et al. Dec 2005 B2
6972753 Kimura et al. Dec 2005 B1
7007236 Dempski et al. Feb 2006 B2
7015418 Cahill et al. Mar 2006 B2
7030861 Westerman et al. Apr 2006 B1
7084868 Farag et al. Aug 2006 B2
7098392 Sitrick et al. Aug 2006 B2
7121470 McCall et al. Oct 2006 B2
7176904 Satoh Feb 2007 B2
7184030 McCharles et al. Feb 2007 B2
7187489 Miles Mar 2007 B2
7190496 Klug et al. Mar 2007 B2
7202860 Ogawa Apr 2007 B2
7232986 Worthington et al. Jun 2007 B2
7236162 Morrison et al. Jun 2007 B1
7274356 Ung et al. Sep 2007 B2
7355593 Hill et al. Apr 2008 B2
7414617 Ogawa Aug 2008 B2
7559664 Walleman et al. Jul 2009 B1
7619617 Morrison et al. Nov 2009 B2
7692625 Morrison et al. Apr 2010 B2
20010019325 Takekawa Sep 2001 A1
20010022579 Hirabayashi Sep 2001 A1
20010026268 Ito Oct 2001 A1
20010033274 Ong Oct 2001 A1
20020036617 Pryor Mar 2002 A1
20020050979 Oberoi et al. May 2002 A1
20020067922 Harris Jun 2002 A1
20020080123 Kennedy et al. Jun 2002 A1
20020145595 Satoh Oct 2002 A1
20020163530 Takakura et al. Nov 2002 A1
20030001825 Omura et al. Jan 2003 A1
20030025951 Pollard et al. Feb 2003 A1
20030043116 Morrison et al. Mar 2003 A1
20030046401 Abbott et al. Mar 2003 A1
20030063073 Geaghan et al. Apr 2003 A1
20030071858 Morohoshi Apr 2003 A1
20030085871 Ogawa May 2003 A1
20030095112 Kawano et al. May 2003 A1
20030142880 Hyodo Jul 2003 A1
20030151532 Chen et al. Aug 2003 A1
20030151562 Kulas Aug 2003 A1
20040021633 Rajkowski Feb 2004 A1
20040031779 Cahill et al. Feb 2004 A1
20040046749 Ikeda Mar 2004 A1
20040108990 Lieberman Jun 2004 A1
20040149892 Akitt et al. Aug 2004 A1
20040150630 Hinckley et al. Aug 2004 A1
20040169639 Pate et al. Sep 2004 A1
20040178993 Morrison et al. Sep 2004 A1
20040178997 Gillespie et al. Sep 2004 A1
20040179001 Morrison et al. Sep 2004 A1
20040189720 Wilson et al. Sep 2004 A1
20040252091 Ma et al. Dec 2004 A1
20050052427 Wu et al. Mar 2005 A1
20050057524 Hill et al. Mar 2005 A1
20050083308 Homer et al. Apr 2005 A1
20050151733 Sander et al. Jul 2005 A1
20050156900 Hill et al. Jul 2005 A1
20050190162 Newton Sep 2005 A1
20050243070 Ung et al. Nov 2005 A1
20050248540 Newton Nov 2005 A1
20050276448 Pryor Dec 2005 A1
20060022962 Morrison et al. Feb 2006 A1
20060158437 Blythe et al. Jul 2006 A1
20060202953 Pryor et al. Sep 2006 A1
20060227120 Eikman Oct 2006 A1
20060274067 Hikai Dec 2006 A1
20070019103 Lieberman et al. Jan 2007 A1
20070075648 Blythe et al. Apr 2007 A1
20070075982 Morrison et al. Apr 2007 A1
20070116333 Dempski et al. May 2007 A1
20070126755 Zhang et al. Jun 2007 A1
20070139932 Sun et al. Jun 2007 A1
20070236454 Ung et al. Oct 2007 A1
20080062149 Baruk Mar 2008 A1
20080129707 Pryor Jun 2008 A1
20080211766 Westerman et al. Sep 2008 A1
20080284733 Hill et al. Nov 2008 A1
20090146972 Morrison et al. Jun 2009 A1
20100073318 Hu et al. Mar 2010 A1
20100079407 Suggs Apr 2010 A1
20100214268 Huang et al. Aug 2010 A1
20120007804 Morrison et al. Jan 2012 A1
Foreign Referenced Citations (58)
Number Date Country
2412878 Jan 2002 CA
2493236 Dec 2003 CA
198 10 452 Dec 1998 DE
0 279 652 Aug 1988 EP
0 347 725 Dec 1989 EP
0 657 841 Jun 1995 EP
0 762 319 Mar 1997 EP
0 829 798 Mar 1998 EP
1 450 243 Aug 2004 EP
1 297 488 Nov 2006 EP
2204126 Nov 1988 GB
57-211637 Dec 1982 JP
61-196317 Aug 1986 JP
61-260322 Nov 1986 JP
3-054618 Mar 1991 JP
4-350715 Dec 1992 JP
4-355815 Dec 1992 JP
5-181605 Jul 1993 JP
5-189137 Jul 1993 JP
5-197810 Aug 1993 JP
7-110733 Apr 1995 JP
7-230352 Aug 1995 JP
8-016931 Feb 1996 JP
8-108689 Apr 1996 JP
8-240407 Sep 1996 JP
8-315152 Nov 1996 JP
9-091094 Apr 1997 JP
9-224111 Aug 1997 JP
9-319501 Dec 1997 JP
10-105324 Apr 1998 JP
11-051644 Feb 1999 JP
11-064026 Mar 1999 JP
11-085376 Mar 1999 JP
11-110116 Apr 1999 JP
2000-105671 Apr 2000 JP
2000-132340 May 2000 JP
2001-075735 Mar 2001 JP
2001-282456 Oct 2001 JP
2001-282457 Oct 2001 JP
2002-236547 Aug 2002 JP
2003-158597 May 2003 JP
2003-167669 Jun 2003 JP
2003-173237 Jun 2003 JP
WO 9807112 Feb 1999 WO
WO 9908897 Feb 1999 WO
WO 9921122 Apr 1999 WO
WO 9928812 Jun 1999 WO
WO 9940562 Aug 1999 WO
WO 0203316 Jan 2002 WO
WO 0207073 Jan 2002 WO
WO 0227461 Apr 2002 WO
WO 03105074 Dec 2003 WO
WO 2005034027 Apr 2005 WO
WO 2005106775 Nov 2005 WO
WO 2006095320 Sep 2006 WO
WO 2007003196 Jan 2007 WO
WO 2007064804 Jun 2007 WO
WO 2009146544 Dec 2009 WO
Non-Patent Literature Citations (53)
Entry
International Search Report for PCT/CA2008/001350 mailed Oct. 17, 2008 (5 Pages).
International Search Report and Written Opinion for PCT/CA2007/002184 mailed Mar. 13, 2008 (13 Pages).
International Search Report and Written Opinion for PCT/CA2004/001759 mailed Feb. 21, 2005 (7 Pages).
International Search Report for PCT/CA01/00980 mailed Oct. 22, 2001 (3 Pages).
International Search Report and Written Opinion for PCT/CA2009/000773 mailed Aug. 12, 2009 (11 Pages).
European Search Opinion for EP 07 25 0888 dated Jun. 22, 2007 (2 pages).
European Search Report for EP 07 25 0888 dated Jun. 22, 2007 (2 pages).
European Search Report for EP 06 01 9269 dated Nov. 9, 2006 (4 pages).
European Search Report for EP 06 01 9268 dated Nov. 9, 2006 (4 pages).
European Search Report for EP 04 25 1392 dated Jan. 11, 2007 (2 pages).
European Search Report for EP 02 25 3594 dated Dec. 14, 2005 (3 pages).
Partial European Search Report for EP 03 25 7166 dated May 19, 2006 (4 pages).
May 12, 2009 Office Action for Canadian Patent Application No. 2,412,878 (4 pages).
International Search Report and Written Opinion for PCT/CA2010/001085 mailed Oct. 12, 2010.
Jul. 5, 2010 Office Action, with English translation, for Japanese Patent Application No. 2005-000268 (6 pages).
Förstner, Wolfgang, “On Estimating Rotations”, Festschrift für Prof. Dr. -Ing. Heinrich Ebner Zum 60. Geburtstag, Herausg.: C. Heipke und H. Mayer, Lehrstuhl für Photogrammetrie und Fernerkundung, TU München, 1999, 12 pages. (http://www.ipb.uni-bonn.de/papers/#1999).
Funk, Bud K., CCD's in optical panels deliver high resolution, Electronic Design, Sep. 27, 1980, pp. 139-143.
Hartley, R. and Zisserman, A., “Multiple View Geometry in Computer Vision”, Cambridge University Press, First published 2000, Reprinted (with corrections) 2001, pp. 70-73, 92-93, and 98-99.
Kanatani, K., “Camera Calibration”, Geometric Computation for Machine Vision, Oxford Engineering Science Series, vol. 37, 1993, pp. 56-63.
Tapper, C.C., et al., “On-Line Handwriting Recognition—A Survey”, Proceedings of the International Conference on Pattern Recognition (ICPR), Rome, Nov. 14-17, 1988, Washington, IEEE Comp. Soc. Press. US, vol. 2 Conf. 9, Nov. 14, 1988, pp. 1123-1132.
Wang, F., et al., “Stereo camera calibration without absolute world coordinate information”, SPIE, vol. 2620, pp. 655-662, Jun. 14, 1995.
Wrobel, B., "Minimum Solutions for Orientation", Calibration and Orientation of Cameras in Computer Vision, Springer Series in Information Sciences, vol. 34, 2001, pp. 28-33.
Press Release, “IntuiLab introduces IntuiFace, An interactive table and its application platform” Nov. 30, 2007.
Overview page for IntuiFace by IntuiLab, Copyright 2008.
NASA Small Business Innovation Research Program: Composite List of Projects 1983-1989, Aug. 1990.
Touch Panel, vol. 1 No. 1 (2005).
Touch Panel, vol. 1 No. 2 (2005).
Touch Panel, vol. 1 No. 3 (2006).
Touch Panel, vol. 1 No. 4 (2006).
Touch Panel, vol. 1 No. 5 (2006).
Touch Panel, vol. 1 No. 6 (2006).
Touch Panel, vol. 1 No. 7 (2006).
Touch Panel, vol. 1 No. 8 (2006).
Touch Panel, vol. 1 No. 9 (2006).
Touch Panel, vol. 1 No. 10 (2006).
Touch Panel, vol. 2 No. 1 (2006).
Touch Panel, vol. 2 No. 2 (2007).
Touch Panel, vol. 2 No. 3 (2007).
Touch Panel, vol. 2 No. 4 (2007).
Touch Panel, vol. 2 No. 5 (2007).
Touch Panel, vol. 2 No. 6 (2007).
Touch Panel, vol. 2 No. 7-8 (2008).
Touch Panel, vol. 2 No. 9-10 (2008).
Touch Panel, vol. 3 No. 1-2 (2008).
Touch Panel, vol. 3 No. 3-4 (2008).
Touch Panel, vol. 3 No. 5-6 (2009).
Touch Panel, vol. 3 No. 7-8 (2009).
Touch Panel, vol. 3 No. 9 (2009).
Touch Panel, vol. 4 No. 2-3 (2009).
Touch Panel, vol. 5 No. 2-3 (Sep. 2010).
Touch Panel, vol. 5 No. 4 (Nov. 2010).
Villamor et al. “Touch Gesture Reference Guide”, Apr. 15, 2010.
Eric Weisstein, "Determinant -- from Wolfram MathWorld", last updated on Feb. 16, 2009 and printed out as it appeared on the Internet on Feb. 18, 2009, retrieved from the Wayback Machine at <web.archive.org/web/20090218175549/http://mathworld.wolfram.com/Determinant.html> on Mar. 11, 2013.
Related Publications (1)
Number Date Country
20110006981 A1 Jan 2011 US