The present invention relates generally to interactive input systems, and in particular to a method for distinguishing between a plurality of pointers in an interactive input system and to an interactive input system employing the method.
Interactive input systems that allow users to inject input into an application program using an active pointer (e.g. a pointer that emits light, sound or other signal), a passive pointer (e.g. a finger, cylinder or other object) or other suitable input device such as for example, a mouse or trackball, are well known. These interactive input systems include but are not limited to: touch systems comprising touch panels employing analog resistive or machine vision technology to register pointer input such as those disclosed in U.S. Pat. Nos. 5,448,263; 6,141,000; 6,337,681; 6,747,636; 6,803,906; 7,232,986; 7,236,162; and 7,274,356 and in U.S. Patent Application Publication No. 2004/0179001 assigned to SMART Technologies ULC of Calgary, Alberta, Canada, assignee of the subject application, the contents of which are incorporated by reference; touch systems comprising touch panels employing electromagnetic, capacitive, acoustic or other technologies to register pointer input; tablet personal computers (PCs); touch-enabled laptop PCs; personal digital assistants (PDAs); and other similar devices.
In order to facilitate the detection of pointers relative to an interactive surface, various techniques may be employed. For example, U.S. Pat. No. 6,346,966 to Toh describes an image acquisition system that allows different lighting techniques to be applied concurrently to a scene containing an object of interest. From a single position, multiple images, each illuminated by a different lighting technique, can be acquired by selecting specific wavelength bands for acquiring each of the images. In a typical application, both back lighting and front lighting can be used simultaneously to illuminate an object, and different image analysis methods may be applied to the images.
U.S. Pat. No. 4,787,012 to Guskin describes a method and apparatus for illuminating a subject being photographed by a camera by generating infrared light from an infrared light source and illuminating the subject with the infrared light. The source of infrared light is preferably mounted in or on the camera to shine on the face of the subject being photographed.
According to U.S. Patent Application Publication No. 2006/0170658 to Nakamura et al., in order to enhance both the accuracy of determining whether an object has contacted a screen and the accuracy of calculating the coordinate position of the object, edges of a captured image are detected by an edge detection circuit; using these edges, a contact determination circuit determines whether or not the object has contacted the screen. A calibration circuit controls the sensitivity of optical sensors in response to external light, whereby a drive condition of the optical sensors is changed based on the output values of the optical sensors.
U.S. Patent Application Publication No. 2005/0248540 to Newton describes a touch panel that has a front surface, a rear surface, a plurality of edges, and an interior volume. An energy source is positioned in proximity to a first edge of the touch panel and is configured to emit energy that is propagated within the interior volume of the touch panel. A diffusing reflector is positioned in proximity to the front surface of the touch panel for diffusively reflecting at least a portion of the energy that escapes from the interior volume. At least one detector is positioned in proximity to the first edge of the touch panel and is configured to detect intensity levels of the energy that is diffusively reflected across the front surface of the touch panel. Preferably, two detectors are spaced apart from each other in proximity to the first edge of the touch panel to allow calculation of touch locations using simple triangulation techniques.
U.S. Patent Application Publication No. 2003/0161524 to King describes a method and system to improve the ability of a machine vision system to distinguish the desired features of a target by taking one or more images of the target under different lighting conditions, and using image analysis to extract information of interest about the target. Ultraviolet light is used alone or in connection with direct on-axis and/or low angle lighting to highlight the different features of the target. One or more filters disposed between the target and the camera help to filter out unwanted light from the one or more images taken by the camera. The images may be analyzed by conventional image analysis techniques and the results recorded or displayed on a computer display device.
In interactive input systems that use rear projection devices (such as rear projection displays, liquid crystal display (LCD) televisions, plasma televisions, etc.) to generate the image presented on the input surface, multiple pointers are difficult to identify and track, especially in machine vision interactive input systems that employ two imaging devices. Pointer locations in the images seen by each imaging device may be differentiated using methods based on, for example, pointer size or the intensity of the light reflected by the pointer. Although these methods work well in controlled environments, in uncontrolled environments they suffer drawbacks due to, for example, ambient lighting effects such as reflected light. Such lighting effects may cause a pointer in the background to appear brighter to an imaging device than a pointer in the foreground, resulting in the incorrect pointer being identified as closer to the imaging device. Moreover, in machine vision interactive input systems employing two imaging devices, there are some positions where one pointer will obscure another pointer from one of the imaging devices, resulting in ambiguity as to the location of the true pointer. As more pointers are brought into the fields of view of the imaging devices, the likelihood of this ambiguity increases. This ambiguity causes difficulties in triangulating pointer positions.
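By way of illustration only, the following Python sketch shows why this ambiguity arises: with two imaging devices each reporting one bearing angle per observed pointer, but without knowing which bearing belongs to which pointer, two pointers produce 2 × 2 = 4 candidate intersections, of which only two correspond to real pointers. The camera positions, bearing values, and helper names are hypothetical and are not part of the disclosed system.

```python
import math

def ray_intersection(cam_a, angle_a, cam_b, angle_b):
    """Intersect two sight-line rays, one from each imaging device.

    cam_a and cam_b are (x, y) camera positions; angles are bearings
    in radians measured from the positive x-axis.
    """
    dax, day = math.cos(angle_a), math.sin(angle_a)
    dbx, dby = math.cos(angle_b), math.sin(angle_b)
    det = dax * dby - day * dbx
    if abs(det) < 1e-9:
        return None  # parallel sight lines; no intersection
    # Solve cam_a + t * da = cam_b + s * db for t.
    t = ((cam_b[0] - cam_a[0]) * dby - (cam_b[1] - cam_a[1]) * dbx) / det
    return (cam_a[0] + t * dax, cam_a[1] + t * day)

# Two cameras at adjacent corners of a hypothetical input surface.
cam_a, cam_b = (0.0, 0.0), (4.0, 0.0)

# Each camera sees two bearings, one per pointer, but cannot tell
# which bearing belongs to which pointer: 2 x 2 = 4 candidates.
bearings_a = [math.radians(60), math.radians(40)]
bearings_b = [math.radians(120), math.radians(140)]

for ba in bearings_a:
    for bb in bearings_b:
        print(ray_intersection(cam_a, ba, cam_b, bb))
```

The two spurious intersections correspond to the imaginary touch points that the ambiguity removal routines described below are designed to identify and discard.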
It is therefore an object of the present invention at least to provide a novel method for distinguishing between a plurality of pointers in an interactive input system and a novel interactive input system employing the method.
Accordingly, in one aspect there is provided a method for distinguishing between a plurality of pointers in an interactive input system comprising calculating a plurality of potential coordinates for a plurality of pointers in proximity of an input surface of the interactive input system; displaying visual indicators associated with each potential coordinate on the input surface; and determining real pointer locations and imaginary pointer locations associated with each potential coordinate from the visual indicators.
According to another aspect there is provided a method for distinguishing at least two pointers in an interactive input system comprising the steps of calculating touch point coordinates associated with each of the at least two pointers in contact with an input surface of the interactive input system; displaying a first visual indicator on the input surface at regions associated with a first pair of touch point coordinates and displaying a second visual indicator on the input surface at regions associated with a second pair of touch point coordinates; capturing with an imaging system a first image of the input surface during the display of the first visual indicator and the second visual indicator on the input surface at the regions associated with the first and second pairs of touch point coordinates; displaying the second visual indicator on the input surface at the regions associated with the first pair of touch point coordinates and the first visual indicator on the input surface at the regions associated with the second pair of touch point coordinates; capturing with the imaging system a second image of the input surface during the display of the second visual indicator on the input surface at the regions associated with the first pair of touch point coordinates and the first visual indicator on the input surface at the regions associated with the second pair of touch point coordinates; and comparing the first image to the second image to verify real touch point coordinates from the first pair and second pair of touch point coordinates.
According to yet another aspect there is provided an interactive input system comprising a touch panel having an input surface; an imaging device system operable to capture images of an input area of the input surface when at least one pointer is in contact with the input surface; and a video control device operatively coupled to the touch panel, the video control device enabling displaying of an image pattern on the input surface at a region associated with the at least one pointer, wherein the image pattern facilitates verification of the location of the at least one pointer.
According to yet another aspect there is provided a method for determining a location for at least one pointer in an interactive input system comprising calculating at least one touch point coordinate of at least one pointer on an input surface; displaying a first visual indicator on the input surface at a region associated with the at least one touch point coordinate; capturing a first image of the input surface using an imaging system of the interactive input system while the first visual indicator is displayed; displaying a second visual indicator on the input surface at the region associated with the at least one touch point coordinate; capturing a second image of the input surface using the imaging system while the second visual indicator is displayed; and comparing the first image to the second image to verify the location on the input surface of the at least one pointer.
According to yet another aspect there is provided a method for determining at least one pointer location in an interactive input system comprising displaying a first pattern on an input surface of the interactive input system at regions associated with the at least one pointer; capturing with an imaging device system a first image of the input surface during the display of the first pattern; displaying a second pattern at the regions associated with the at least one pointer; capturing with the imaging device system a second image of the input surface during the display of the second pattern; and subtracting the first image from the second image to calculate a differential image, thereby isolating the change from ambient light.
According to yet another aspect there is provided an interactive input system comprising a touch panel having an input surface; an imaging device system operable to capture images of the input surface; at least one active pointer contacting the input surface, the at least one active pointer having a sensor for sensing changes in light from the input surface; and a video control device operatively coupled to the touch panel and in communication with the at least one active pointer, the video control device enabling displaying of an image pattern on the input surface at a region associated with the at least one pointer, the image pattern facilitating verification of the location of the at least one pointer.
According to yet another aspect there is provided a computer readable medium embodying a computer program, the computer program comprising program code for calculating a plurality of potential coordinates for a plurality of pointers in proximity of an input surface of an interactive input system; program code for causing visual indicators associated with each potential coordinate to be displayed on the input surface; and program code for determining real pointer locations and imaginary pointer locations associated with each potential coordinate from the visual indicators.
According to yet another aspect there is provided a computer readable medium embodying a computer program, the computer program comprising program code for calculating a pair of touch point coordinates associated with each of at least two pointers in contact with an input surface of an interactive input system; program code for causing a first visual indicator to be displayed on the input surface at regions associated with a first pair of touch point coordinates and for causing a second visual indicator to be displayed on the input surface at regions associated with a second pair of touch point coordinates; program code for causing an imaging system to capture a first image of the input surface during the display of the first visual indicator and the second visual indicator on the input surface at the regions associated with the first and second pairs of touch point coordinates; program code for causing the second visual indicator to be displayed on the input surface at the regions associated with the first pair of touch point coordinates and for causing the first visual indicator to be displayed on the input surface at regions associated with the second pair of touch point coordinates; program code for causing the imaging system to capture a second image of the input surface during the display of the second visual indicator on the input surface at the regions associated with the first pair of touch point coordinates and the first visual indicator on the input surface at the regions associated with the second pair of touch point coordinates; and program code for comparing the first image to the second image to verify real touch point coordinates from the first pair and second pair of touch point coordinates.
According to still yet another aspect there is provided a computer readable medium embodying a computer program, the computer program comprising program code for calculating at least one touch point coordinate of at least one pointer on an input surface; program code for causing a first visual indicator to be displayed on the input surface at a region associated with the at least one touch point coordinate; program code for causing a first image of the input surface to be captured using an imaging system while the first visual indicator is displayed; program code for causing a second visual indicator to be displayed on the input surface at the region associated with the at least one touch point coordinate; program code for causing a second image of the input surface to be captured using the imaging system while the second visual indicator is displayed; and program code for comparing the first image to the second image to verify the location on the input surface of the at least one pointer.
According to still yet another aspect there is provided a computer readable medium embodying a computer program, the computer program comprising program code for causing a first pattern to be displayed on an input surface of an interactive input system at regions associated with at least one pointer; program code for causing a first image of the input surface to be captured with an imaging device system during the display of the first pattern; program code for causing a second pattern to be displayed on the input surface at the regions associated with the at least one pointer; program code for causing the imaging device system to capture a second image of the input surface during the display of the second pattern; and program code for subtracting the first image from the second image to calculate a differential image, thereby isolating the change from ambient light.
Embodiments will now be described more fully with reference to the accompanying drawings.
Turning now to the drawings, an interactive input system that allows a user to inject input into an application program is shown and is generally identified by reference numeral 20. Interactive input system 20 comprises a touch panel 22 having an input surface 24 surrounded by a frame 26.
Touch panel 22 is coupled to a master controller 30. Master controller 30 is coupled to a video controller 34 and a processing structure 32. Processing structure 32 executes one or more application programs and uses touch point location information communicated from the interactive input system 20 via master controller 30 to generate and update display images presented on touch panel 22 via video controller 34. In this manner, interactions, or touch points, are recorded as writing or drawing or are used to execute commands associated with application programs running on processing structure 32.
The processing structure 32 in this embodiment is a general purpose computing device in the form of a computer. The computer comprises, for example, a processing unit, system memory (volatile and/or non-volatile memory), other removable or non-removable memory (hard drive, RAM, ROM, EEPROM, CD-ROM, DVD, flash memory, etc.), and a system bus coupling the various components to the processing unit. The processing unit runs a host software application/operating system which, during execution, provides a graphical user interface presented on the touch panel 22 such that freeform or handwritten ink objects and other objects can be input and manipulated via pointer interaction with the input surface 24 of the touch panel 22.
A pair of imaging devices 40 and 42 is disposed on frame 26, with each imaging device positioned adjacent a different corner of the frame. Each imaging device is arranged so that its optical axis generally forms a 45 degree angle with the adjacent sides of the frame. In this manner, each imaging device 40 and 42 captures the complete extent of input surface 24 within its field of view. One of ordinary skill in the art will appreciate that other optical axis or field of view arrangements are possible.
Referring now to the imaging devices, each imaging device 40 and 42 comprises an image sensor and lens assembly 280, a first-in first-out (FIFO) buffer 282 coupled to the image sensor and lens assembly 280, and a digital signal processor (DSP) 284 coupled to the FIFO buffer 282. The DSP 284 communicates with master controller 30 via a serial port 281.
The CMOS camera image sensor comprises a Photobit PB300 image sensor configured for a 20×640 pixel sub-array that can be operated to capture image frames at high rates, including those in excess of 200 frames per second. FIFO buffer 282 and DSP 284 are manufactured by Cypress under part number CY7C4211V and by Analog Devices under part number ADSP2185M, respectively.
DSP 284 provides control information to the image sensor and lens assembly 280 via control bus 285. The control information allows DSP 284 to control parameters of the image sensor and lens assembly 280 such as exposure, gain, array configuration, reset and initialization. DSP 284 also provides clock signals to the image sensor and lens assembly 280 to control the frame rate of the image sensor and lens assembly 280. DSP 284 also communicates image information acquired from the image sensor and associated lens assembly 280 to master controller 30 via serial port 281.
Master controller 30 comprises a DSP 390 that communicates with the DSPs 284 of imaging devices 40 and 42 and with processing structure 32. In this embodiment, the video controller 34 modifies, in hardware, the video signal conveyed to touch panel 22 so that visual indicators can be inserted into selected video frames before they are displayed on input surface 24.
One of skill in the art will appreciate that the video modification could also be performed in software on the processing structure 32, albeit with reduced performance. Hardware video modification provides very fast response times and can be made synchronous with respect to the imaging devices (e.g., the cameras can capture a frame at the same time the video signal is being modified), compared to a software method.
Master controller 30 and imaging devices 40 and 42 follow a communication protocol that enables bi-directional communications via a common serial cable, similar to a universal serial bus (USB) or RS-232 connection. The transmission bandwidth is divided into thirty-two (32) 16-bit channels. Of the thirty-two channels, five (5) channels are assigned to the DSP 284 of each of imaging devices 40 and 42 and to DSP 390 in master controller 30. The remaining channels are unused and may be reserved for further expansion of control and image processing functionality (e.g., use of additional cameras). Master controller 30 monitors the channels assigned to the DSPs 284 of the imaging devices, while the DSP 284 in each imaging device 40 and 42 monitors the channels assigned to master controller DSP 390. Communications between the master controller 30 and imaging devices 40 and 42 are performed as background processes in response to interrupts.
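As a minimal sketch only, the channel layout described above might be modeled as follows. The publication does not specify which channel numbers map to which DSP, so the contiguous assignment and the owner names are illustrative assumptions.

```python
# Hypothetical model of the 32-channel layout: five 16-bit channels
# per DSP, remaining channels reserved for future expansion.
NUM_CHANNELS = 32
CHANNELS_PER_DSP = 5
OWNERS = ("imaging_device_40_dsp_284", "imaging_device_42_dsp_284",
          "master_controller_dsp_390")

channel_map = {}
for i, owner in enumerate(OWNERS):
    for ch in range(i * CHANNELS_PER_DSP, (i + 1) * CHANNELS_PER_DSP):
        channel_map[ch] = owner
for ch in range(len(OWNERS) * CHANNELS_PER_DSP, NUM_CHANNELS):
    channel_map[ch] = "reserved"  # e.g., for additional cameras

print(channel_map[0], channel_map[7], channel_map[31])
```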
In operation, each imaging device 40 and 42 acquires images of input surface 24 within the field of view of its image sensor and lens assembly 280 at the frame rate established by the clock of DSP 284. Once acquired, these images are processed to determine whether a pointer is present within the captured image.
Pointer presence is detected by imaging devices 40 and 42 as one or more dark or illuminated regions, created by a contrast difference at the region of contact of the pointer with the input surface 24. For example, the point of contact of the pointer may appear dark against a bright background region on the input surface 24 or, alternatively, illuminated relative to a dark background. Pixel information associated with the one or more illuminated (or dark) regions is captured by the image sensor and lens assembly 280 and then processed by the camera DSPs 284.
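A minimal sketch of this kind of contrast-based detection follows, assuming a pointer-free background reference frame and a simple threshold; the function name, threshold value, and array shapes are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def find_pointer_runs(frame, background, threshold=30):
    """Locate pointer candidates in a captured frame by contrast.

    frame and background are 2-D grayscale arrays (e.g., a 20 x 640
    pixel sub-array); a pointer appears as a run of columns whose mean
    intensity differs from a pointer-free background profile, whether
    darker (occlusion) or brighter (reflection).
    """
    profile = frame.mean(axis=0) - background.mean(axis=0)
    hits = np.abs(profile) > threshold
    runs, start = [], None
    for col, hit in enumerate(hits):
        if hit and start is None:
            start = col
        elif not hit and start is not None:
            runs.append((start, col - 1))
            start = None
    if start is not None:
        runs.append((start, len(hits) - 1))
    return runs  # each run's centre column maps to a bearing angle
```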
If a pointer is present, the images are further processed to determine the pointer's characteristics and whether the pointer is in contact with input surface 24, or hovering above input surface 24. Pointer characteristics are then converted into pointer information packets (PIPs) and the PIPs are queued for transmission to master controller 30. Imaging devices 40 and 42 also receive and respond to diagnostic PIPs generated by master controller 30.
Master controller 30 polls imaging devices 40 and 42 at a set frequency (in this embodiment, 70 times per second) for PIPs and triangulates the pointer characteristics in the PIPs to determine pointer position data, with triangulation ambiguity removed using active feedback from the interactive input system as described below. As one of skill in the art will appreciate, synchronous or asynchronous interrupts could also be used in place of fixed frequency polling.
Master controller 30 in turn transmits pointer position data and/or status information to processing structure 32. In this manner, the pointer position data transmitted to processing structure 32 can be recorded as writing or drawing or can be used to control execution of application programs executed by processing structure 32. Processing structure 32 also updates the display output conveyed to touch panel 22 so that information displayed on input surface 24 reflects the pointer activity.
Master controller 30 also receives commands from the processing structure 32, responds accordingly, and conveys diagnostic PIPs to imaging devices 40 and 42.
Interactive input system 20 operates with both passive pointers and active pointers. As mentioned above, a passive pointer is typically one that does not emit any signal when used in conjunction with the input surface. Passive pointers may include, for example, fingers, cylinders of material or other objects brought into contact with the input surface 24.
Turning to the operation of interactive input system 20 when multiple pointers are brought into proximity of input surface 24, ambiguities in the triangulated pointer positions are removed using active feedback: a decoy touch points removal routine (step 508), a touch points association routine (step 510), and a touch point location adjustment routine (step 512) are performed on the potential touch point coordinates.

Three types of ambiguities are addressed by these routines: decoy ambiguity, where a non-existent touch point is reported due to lighting effects or obstructions; association ambiguity, where real touch points cannot be distinguished from imaginary touch points produced by triangulation; and location ambiguity, where a calculated touch point location deviates from the actual position of the pointer on the input surface 24.
The decoy touch points removal routine of step 508 is implemented to resolve decoy ambiguity. Such ambiguity occurs when at least one of the imaging devices 40 or 42 sees a decoy point due to, for example, ambient lighting conditions or an obstruction, such as dirt or smudges, on the bezel or lens of the imaging device.
During the decoy touch points removal routine, video controller 34 displays a visual indicator, such as a bright spot, at each potential touch point location, and imaging devices 40 and 42 capture images of the input surface 24 while the indicators are displayed. A pointer that is actually present at a potential touch point location reflects the displayed indicator, causing the imaging device viewing that location to sense a change in image illumination along its sight line to the touch point. If the imaging device 40 does not sense any image illumination change along sight line 604 to a potential touch point, that touch point is determined to be a decoy and is removed from the set of potential touch points.
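The following Python sketch illustrates the decoy test under stated assumptions: `display` and `camera` are hypothetical stand-ins for video controller 34 and an imaging device, and `intensity_along` is an assumed helper returning the imaged intensity along the sight line to a given location.

```python
def is_real_touch_point(point, display, camera, delta=20):
    """Flash an indicator at one candidate location and compare images
    captured before and during the flash; a real pointer changes the
    imaged intensity along the sight line to that location.
    """
    baseline = camera.capture()
    display.show_spot(point, bright=True)
    flashed = camera.capture()
    display.clear_spot(point)
    change = abs(int(flashed.intensity_along(point)) -
                 int(baseline.intensity_along(point)))
    return change > delta

def remove_decoys(candidates, display, camera):
    """Keep only candidates that respond to the flashed indicator."""
    return [p for p in candidates if is_real_touch_point(p, display, camera)]
```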
The touch points association routine of step 510 is implemented to resolve association ambiguity. When two pointers are in contact with the input surface 24, triangulating the sight lines of imaging devices 40 and 42 yields four potential touch point locations: one pair of real locations and one pair of imaginary locations.
During the touch points association routine, video controller 34 displays, in a first video frame, a first visual indicator, such as a bright spot, at the first pair of potential touch point locations (for example, locations A and B) and a second visual indicator, such as a dark spot, at the second pair of potential touch point locations (locations C and D). A first image is captured by the imaging devices while these indicators are displayed. In a second video frame, the indicators are swapped, so that the dark spots are displayed at locations A and B and the bright spots at locations C and D, and a second image is captured. Comparing the two images reveals which pair of locations exhibits the illumination changes consistent with the presence of real pointers, thereby verifying the real touch point coordinates.
Alternatively, in the first video frame, a bright spot may be displayed at one pointer location while dark spots are displayed at the remaining pointer locations. For example, location A may be bright while locations B, C, and D are dark. In the second video frame, a bright spot is displayed at another pointer location of the second pair, that is, at either location C or D. This allows one of the real inputs to be identified by observing the change in illumination at the locations where the spots are displayed. Once one real input is identified, the other real input is also determined, because the real touch points occur as a pair. Alternatively, one dark spot and three bright spots may be used.
The above embodiment describes inserting spots at all target locations and testing all target locations simultaneously. Those of skill in the art will appreciate that other indicators and testing sequences may be employed. For example, during the touch points association routine of step 510, video controller 34 may display indicators of different intensities in different video frame sets at target touch point groups one at a time so that each point group is tested one by one. The routine finishes when a real touch point group is found. Alternatively, the video controller 34 may display a visual indicator of different intensities in different video frame sets at one point location at a time so that each target touch point is tested individually. This alternate embodiment may also be used to remove decoy points, as discussed in the decoy points removal routine of step 508, at the same time. In a further alternate embodiment, the visual indicator could be positioned on the input surface 24 at locations chosen advantageously relative to the imaging devices 40 and 42. For example, a bright spot may be displayed at the target touch point but slightly off-center, so that it is closer to the imaging device 40, 42 along a vector from the touch point towards the imaging device 40, 42. This would result in the imaging device capturing a brighter illumination of a pointer if one is at that location.
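A minimal sketch of the frame-swap comparison described above follows. The `display` and `camera` objects and their methods are hypothetical interfaces, not the disclosed hardware, and the summed-difference decision rule is an illustrative assumption.

```python
def associate_touch_points(pair1, pair2, display, camera):
    """Distinguish the real pair of touch points from the imaginary
    pair by swapping bright/dark indicators between two video frames.

    pair1 and pair2 are the two candidate pairs from triangulation,
    e.g. (A, B) and (C, D).
    """
    # Frame 1: bright spots at pair1, dark spots at pair2.
    display.show_spots(pair1, bright=True)
    display.show_spots(pair2, bright=False)
    frame1 = camera.capture()
    # Frame 2: indicators swapped.
    display.show_spots(pair1, bright=False)
    display.show_spots(pair2, bright=True)
    frame2 = camera.capture()

    def response(pair, bright_frame, dark_frame):
        # A real pointer reflects the indicator beneath it, so its
        # imaged intensity is higher in the frame where it is lit.
        return sum(bright_frame.intensity_at(p) - dark_frame.intensity_at(p)
                   for p in pair)

    if response(pair1, frame1, frame2) > response(pair2, frame2, frame1):
        return pair1  # pair1 tracked its indicator: real touch points
    return pair2
```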
Advantageously, because the capture rate of each imaging device sufficiently exceeds the refresh rate of the display, indicators can be inserted into only a few video frames and appear nearly subliminal to the user. To further reduce this distraction, camouflaging techniques, such as water ripple effects under the pointer or longer flash sequences, may be provided following a positive target verification. These techniques help to disguise the artifacts perceived by the user and provide positive feedback confirming that a touch point has been correctly registered. Alternatively, imaging devices 40 and 42 having lower frame rates may capture images synchronously with the video controller 34 in order to capture the indicators without the indicators being readily observed by the user.
The touch point location adjustment routine of step 512 is implemented to resolve location ambiguity, that is, differences between the calculated touch point coordinates and the actual position of the pointer on the input surface 24. During this routine, video controller 34 displays an indicator pattern, such as an intensity gradient, at the region associated with a calculated touch point. The portion of the pattern reflected by the pointer is sensed by imaging devices 40 and 42, and the offset between the calculated coordinates and the actual pointer position is determined from the sensed illumination and used to adjust the touch point coordinates.

Those of skill in the art will appreciate that other patterns of indicators may be used during touch point location adjustment. For example, the rate of change of intensity within the displayed pattern may be linear, circular or rectangular about the calculated touch point location.
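Purely as an assumed illustration (the publication does not spell out the adjustment computation), one simple way to turn a sensed intensity into a positional correction with a linear gradient pattern is sketched below:

```python
def estimate_offset(sensed_intensity, centre_intensity, slope):
    """Convert a sensed brightness into a signed positional offset.

    Assumes a linear gradient with known slope (intensity units per
    millimetre) centred on the estimated touch point: the difference
    between the intensity sensed at the pointer and the intensity at
    the gradient centre gives the offset along the gradient axis.
    """
    return (sensed_intensity - centre_intensity) / slope

# Example: a gradient of 2 units/mm centred on the estimate; the
# pointer reflects intensity 136 where the centre would give 128,
# so the pointer sits about 4 mm off along the gradient axis.
print(estimate_offset(136, 128, 2.0))  # -> 4.0
```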
The previous embodiments employ imaging devices 40 and/or 42 to detect pointer positions for triangulation and remove ambiguities by detecting changes in light intensity in the pointer images captured by the imaging devices 40 and 42. In another embodiment, an active pointer that detects luminous changes in its own vicinity is used to remove ambiguities.
In situations where the processing structure 32 is unable to determine an accurate active pointer location using only the two imaging devices 40 and 42, active pointer feedback may be used. When the tip of the active pointer 100 is brought into contact with the input surface 24 with sufficient force to push the actuator 106 into the tip 104, the sensors in the tip 104 are powered on and the radio frequency receiver 110 of interactive input system 20 is notified of the change in state of operation. In this mode, the active pointer provides a secure, spatially localized communications channel from input surface 24 to the processing structure 32. Using a process similar to that described above, the processing structure 32 signals the video controller 34 to display indicators or artifacts in some video frames. The active pointer 100 senses the nearby illumination changes and transmits this illumination change information to the processing structure 32 via the communication channel 120. The processing structure 32 removes ambiguities based on the information it receives.
The same gradient patterns described above with reference to the touch point location adjustment routine may also be used with the active pointer 100, with the sensors in the pointer tip, rather than the imaging devices, sensing the displayed pattern.
Alternatively, the adverse effects of ambient light may also be reduced by using multiple orthogonal modes of controlled lighting as disclosed in U.S. Provisional Patent Application No. 61/059,183 to Zhou et al. entitled “Interactive Input System And Method”, assigned to SMART Technologies ULC, the contents of which are incorporated by reference. Since the undesired ambient light generally consists of a steady component and several periodic components, the frequency and sequence of flashes generated by video controller 34 are specifically selected to avoid competing with the largest spectral contributions from DC light sources (e.g., sunlight) and AC light sources (e.g., fluorescent lamps). Selecting a set of eight Walsh codes and a native frame rate of 120 hertz with 8 subframes, for example, allows the system to filter out the unpredictable external light sources and to observe only the controlled light sources. Imaging devices 40 and 42 operate at the subframe rate of 960 frames per second, while the DC and AC light sources are predominantly characterized by frequency contributions at 0 hertz and 120 hertz, respectively. Notably, three of the eight Walsh codes have spectral nulls at both 0 hertz and 120 hertz (at a sample rate of 960 frames per second), and each of these codes is individually modulated with the light for reflection by a pointer. The Walsh code generator is synchronized with the sensor shutters of imaging devices 40 and 42, whose captured images are correlated to eliminate the signal information captured from stray ambient light. Advantageously, the sensors are also less likely to saturate when their respective shutters operate at such a rapid frequency.
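The spectral-null property is easy to verify numerically. The following NumPy sketch builds the length-8 Walsh (Hadamard) codes by the standard Sylvester construction and checks which rows have nulls at both 0 hertz and 120 hertz when sampled at 960 frames per second; exactly three of the eight codes pass, consistent with the description above.

```python
import numpy as np

# Sylvester construction of the 8 x 8 Hadamard (Walsh) matrix.
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])

SAMPLE_RATE = 960  # subframe rate, frames per second
# For a length-8 periodic code at 960 fps, FFT bin spacing is 120 Hz,
# so bins 0 and 1 correspond to DC (sunlight) and 120 Hz (AC lamps).
for row in range(8):
    spectrum = np.abs(np.fft.fft(H[row].astype(float)))
    has_nulls = spectrum[0] < 1e-9 and spectrum[1] < 1e-9
    print(f"Walsh code {row}: nulls at 0 Hz and 120 Hz -> {has_nulls}")
# Exactly three rows (the codes whose first half repeats in the second
# half) print True, matching the three usable codes described above.
```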
If desired, the active pointer may be provided with LEDs in place of sensors (not shown) in tip 104. The light emitted by the LEDs is modulated in a manner similar to that described above to avoid interference from stray light and to afford the system added features and flexibility. Some of these features are, for example, additional modes of use, assignment of colors to multiple pens, and improved localization, association, and verification of pointer targets in multiple pointer environments and applications.
Alternatively, pointer identification for multiple users can be performed using the techniques described herein. For example, when user A and user B are writing on the input surface 24 with pointer A and pointer B, respectively, each pointer can be uniquely identified by displaying a different indicator under it. The visual indicator for each pointer may differ in color or pattern. Alternatively, a bright spot under each pointer could be uniquely modulated; for example, a bright spot may be lit under pointer A while a dark spot is displayed under pointer B, or pointer B remains unlit.
The ambiguity removal routines described herein apply to many different types of camera-based interactive devices with both active and passive pointers. In an alternative embodiment, LEDs are positioned at the imaging devices and transmit light across the input surface to a retroreflective bezel. Light incident upon the retroreflective bezel returns to be captured by the imaging devices and provides a backlight for passive pointers. Another alternative is to use lit bezels. In these embodiments, the retroreflective bezels or lit bezels are used to improve the images of the pointer for triangulation where an ambiguity exists. Alternatively, a single camera with a mirror configuration may be used; in this embodiment, a mirror is used to obtain a second vector to the pointer in order to triangulate the pointer position. These configurations are described in the previously incorporated U.S. Pat. No. 7,274,356 to Ung et al., as well as in U.S. Patent Application Publication No. 2007/0236454 to Ung et al. assigned to SMART Technologies ULC, the contents of which are incorporated by reference.
Although the above embodiments of the interactive input system 20 are described as using a display monitor such as, for example, an LCD, CRT or plasma monitor, projectors may also be used to display the screen images and the flashes around the touch point positions.
By inserting indicators into some video frames as described above, the luminous intensity around the pointer 1206 is changed, and this change is sensed by the imaging devices 40 and 42. Such information is then sent to the processing structure 32 via the master controller 30. The processing structure 32 uses the triangulation results and the light intensity information of the pointer images to remove triangulation ambiguities.
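A minimal sketch of the differential-image computation referred to in the aspects above, assuming 8-bit grayscale frames captured immediately before and during an inserted indicator:

```python
import numpy as np

def differential_image(frame_with_indicator, frame_without_indicator):
    """Subtract two frames captured close together in time.

    Steady ambient light contributes equally to both frames and
    cancels in the difference, leaving predominantly the controlled
    change produced by the inserted indicator around the pointer.
    """
    a = frame_with_indicator.astype(np.int16)   # widen to keep sign
    b = frame_without_indicator.astype(np.int16)
    return a - b

# Usage: pixels whose magnitude exceeds a noise threshold mark where
# the indicator (and hence the pointer) changed the scene:
#   diff = differential_image(f1, f2)
#   ys, xs = np.nonzero(np.abs(diff) > 25)
```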
Those of ordinary skill in the art will appreciate that the exact shape, pattern and frequency of the indicators may be varied to accommodate various applications or environments. For example, flashes may be square, circular, rectangular, oval, rings, or a line. Light intensity patterns may be linear, circular or rectangular. The rate of change of intensity within the pattern may also be linear, binary, parabolic, or random. In general, flash characteristics may be fixed or variable and dependent on the intensity of ambient light, pointer dimensions, user constraints, time, tracking tolerances, or other parameters of interactive input system 20 and its environment. In Europe and other places where the frequency of electrical systems is 50 hertz, for example, the native frame rate and subframe rate may be 100 and 800 frames per second, respectively.
In an alternative embodiment, touch panel 22 comprises a display that emits IR light at each pixel location and the image sensors of imaging devices 40 and 42 are provided with IR filters. In this arrangement, the filters allow light originating from the display, and reflected by a target, to pass, while stray light from the visible spectrum is blocked and excluded from processing by the image processing engine.
In another embodiment, the camera image sensors of imaging devices 40 and 42 are replaced by a single photo-diode, photo-resistor, or other light energy sensor. The feedback sequence in these embodiments may also be altered to accommodate the lower resolution of such sensors. For example, the whole screen may be flashed, or raster scanned, to initiate the sequence, or at any time during the sequence. Once a target is located, its characteristics may be verified and associated by coding an illuminated sequence in the image pixels below the target or in a manner similar to that previously described.
In yet another embodiment, the interactive input system uses a color imaging device, and the indicators that are displayed are colored or form a colored pattern.
In a further embodiment, the ambiguity removal routine is performed along a polar line, that is, along a sight line extending from one of the imaging devices 40 or 42, with indicators displayed at candidate positions along that line to determine which position is occupied by the pointer.
In still another embodiment of the ambiguity removal routine along a polar line, a white or bright line is displayed on the input surface 24 perpendicular to the line of sight of the imaging device 40 or 42. This white or bright line could move rapidly away from the imaging device, similar to a radar sweep. When the line reaches the pointer, it illuminates the pointer. Based on the distance of the white line from the imaging device at that moment, the distance and angle of the pointer can be determined.
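As an illustrative sketch only, with `display` and `camera` as hypothetical interfaces to the video controller and an imaging device, the radar-style sweep might look like this:

```python
def sweep_for_pointer(display, camera, max_distance_mm, step_mm=5):
    """Sweep a bright line away from the imaging device, radar-style.

    display.show_line(d) is assumed to draw a bright line across the
    input surface, perpendicular to the camera's line of sight, at
    distance d from the camera; camera.pointer_brightened() is assumed
    to report whether the imaged pointer has just lit up.
    """
    for d in range(0, max_distance_mm, step_mm):
        display.show_line(d)
        if camera.pointer_brightened():
            return d  # the line has reached the pointer
    return None  # no pointer found within range
```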
Alternatively, the exchange of information between components may be accomplished via other industry standard interfaces. Such interfaces can include, but are not necessarily limited to, RS-232, PCI, Bluetooth, 802.11 (Wi-Fi), or any of their respective successors. Similarly, video controller 34, while analog in one embodiment, may be digital in another. The particular arrangement and configuration of components for interactive input system 20 may also be altered.
Those of skill in the art will also appreciate that other variations and modifications from those described may be made without departing from the scope and spirit of the invention, as defined by the appended claims.