Interactive electronic display surfaces allow human users to use the display surface both for viewing content, such as computer graphics and video, and for inputting information into the system. Examples of interactive display surfaces include common touch-screens and resistive whiteboards. A whiteboard is analogous to a conventional chalkboard, except that a user “writes” on the whiteboard using an electronic hand-held input device that may look like a pen. The whiteboard determines where the “pen” is pressing against its surface and displays a mark wherever the “pen” is pressed.
Conventional interactive display surfaces are capable of communicating with only a single input device at any given time. That is, conventional interactive display surfaces are not equipped to receive simultaneous inputs from multiple input devices. If multiple input devices were to provide input to a conventional interactive display surface at the same time, errors would likely occur because the interactive display device would not be able to discern one input device from another. Thus, conventional interactive display surfaces are limited to functioning with a single input device at any given time.
The present invention was developed in light of these and other drawbacks.
The present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
An interactive display system is disclosed that facilitates optical communication between a system controller or processor and an input device via a display surface. The optical communication, along with a feedback methodology, enables the interactive display system to receive simultaneous input from multiple input devices. The display surface may be a glass surface configured to display an optical light image generated by a digital light projector (DLP) in response to digital signals from the controller. The input devices may take various forms, such as pointing devices, game pieces, computer mice, etc., each of which includes an optical receiver and some form of transmitter. The DLP sequentially projects a series of visible images (frames) to the display surface to generate a continuous moving video or graphic, such as a movie video, a video game, computer graphics, Internet Web pages, etc. The DLP also projects subliminal optical signals interspersed among the visible images. The subliminal signals are invisible to the human eye. However, optical receivers within the input devices receive the subliminal encoded optical signals. In this way, the controller can communicate information to the input devices in the form of optical signals via the DLP and the interactive display surface. To locate the physical positions of input devices on the display surface, the controller can transmit a subliminal positioning signal over the display surface, using various methodologies. When an input device receives the subliminal positioning signal, the input device can send a unique feedback signal (using various techniques) to the controller, effectively establishing a “handshake” between the controller and the particular input device.
As a result of the unique feedback signals, the controller knows where each of the input devices is located on the display surface and can individually establish simultaneous two-way communication with the input devices for the remaining portion of the image frame. Once the controller knows where the different input devices on the display surface are located, various actions can be taken, including effecting communication between the controller and the input devices, as well as effecting communication between the various input devices through the controller.
Referring now to
With reference to
The interactive display system 10 further includes one or more input devices, shown in
As shown in
The DLP 16 may take a variety of forms. In general, the DLP 16 generates a viewable digital image on the display surface 14 by projecting a plurality of pixels of light onto the display surface 14. It is common for each viewable image to be made up from millions of pixels. Each pixel is individually controlled by the DLP 16 to have a certain color (or grey-scale). The combination of many light pixels of different colors (or grey-scales) on the display surface 14 generates a viewable image or “frame.” Continuous video and graphics are generated by sequentially combining frames together, as in a motion picture.
One embodiment of a DLP 16 includes a digital micro-mirror device (DMD) to project the light pixels onto the display surface 14. Other embodiments could include diffractive light devices (DLD), liquid crystal on silicon devices (LCOS), plasma displays, and liquid crystal displays, to name just a few. Other spatial light modulator and display technologies are known to those of skill in the art and could be substituted and still meet the spirit and scope of the invention. A close-up view of a portion of an exemplary DMD is illustrated in
As shown in
The optical signals received by the input devices D1, D2, DN are transmitted by the DLP 16 interspersed among the visible optical images projected onto the display surface 14 in such a way that the optical signals are not discernible by the human eye. Thus, the visible image is not noticeably degraded. For instance, where the DLP 16 includes a DMD device, a given micro-mirror of the DMD can be programmed to send a digital optical signal interspersed among the repetitive tilting of the micro-mirror that causes a particular color (or grey-scale) to be projected to the display surface for each image frame. While the interspersed optical signal may theoretically alter the color (or grey-scale) of that particular pixel, the alteration is generally so slight that it is undetectable by the human eye. The optical signal transmitted by the DMD may be in the form of a series of optical pulses that are coded according to any of a variety of known encoding techniques.
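The description does not specify a particular pulse code. Purely by way of illustration, Manchester coding is one known technique that fits the constraint described above, because every encoded bit contains exactly one "on" half-pulse, so the pixel's average brightness stays constant regardless of the data sent. The following Python sketch is hypothetical and not part of the described system:

```python
# Illustrative sketch only: one way a controller might encode a data byte
# as a pulse train suitable for interspersing among a pixel's mirror flips.
# Manchester coding keeps the on-time constant (8 "on" half-pulses per
# byte), so the pixel's apparent brightness is not biased by the data.

def manchester_encode(byte: int) -> list:
    """Encode one byte, MSB first: bit 1 -> (1, 0); bit 0 -> (0, 1)."""
    pulses = []
    for i in range(7, -1, -1):
        bit = (byte >> i) & 1
        pulses += [1, 0] if bit else [0, 1]
    return pulses

def manchester_decode(pulses: list) -> int:
    """Recover the byte from its 16 half-bit pulses."""
    byte = 0
    for i in range(0, 16, 2):
        bit = 1 if (pulses[i], pulses[i + 1]) == (1, 0) else 0
        byte = (byte << 1) | bit
    return byte
```

For example, `manchester_decode(manchester_encode(0xA5))` returns `0xA5`, and every byte encodes to a train with the same total on-time.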
Two-way communication between the controller 18 and each input device allows the interactive display system 10 to accommodate simultaneous input from multiple input devices. As described above, other known systems are not able to accommodate multiple input devices simultaneously providing input to the system because other systems are incapable of identifying and distinguishing between the multiple input devices. Two-way communication between the input devices D1, D2, DN and the controller 18 allows the system to use a feedback mechanism to establish a unique “handshake” between each input device D1, D2, DN and the controller 18. In particular, for each frame (still image) generated on the display surface 14, the DLP 16 projects subliminal optical positioning signals to the display surface 14 to locate the input devices D1, D2, DN, and, in response, the input devices D1, D2, DN send feedback signals to the controller 18 to establish a “handshake” between each input device and the controller 18. This may occur for each frame of visible content on the display surface 14. In general, for each image frame, the controller 18 causes one or more subliminal optical signals to be projected onto the display surface 14, and the input devices D1, D2, DN respond to the subliminal signals in such a way that the controller 18 is able to uniquely identify each of the input devices D1, D2, DN, thereby establishing the “handshake” for the particular frame.
The unique “handshake” can be accomplished in various ways. In one embodiment, the controller 18 can cause the DLP 16 to sequentially send out a uniquely-coded positioning signal to each pixel or group of pixels on the display surface 14. When the positioning signal is transmitted to the pixel (or group of pixels) over which the receiver of one of the input devices is positioned, the input device receives the optical positioning signal, and, in response, transmits a unique ID signal (via its transmitter) to the controller 18. The ID signal uniquely identifies the particular input device from which it was transmitted. When the controller receives a unique ID signal from one of the input devices in response to a positioning signal transmitted to a particular pixel, the controller 18 knows where that particular input device is positioned on the display surface. Specifically, the input device is positioned directly over the pixel (or group of pixels) that projected the positioning signal when the input device sent its feedback ID signal to the controller 18. In this way, a feedback “handshake” is established between each of the input devices on the display surface and the controller 18. Thereafter, the controller 18 and input devices can communicate with each other for the remaining portion of the frame—the controller can send optical data signals to the input devices via their respective associated pixels, and the input devices can send data signals to the controller 18 via their respective transmitters—and the controller will be able to distinguish among the various input signals that it receives during that frame. This process can be repeated for each image frame. In this way, the position of each input device on the display surface can be accurately identified from frame to frame.
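The per-pixel scan-and-reply handshake described above can be sketched, purely for illustration, as a simulation in Python. The grid dimensions, device positions, and function names below are hypothetical stand-ins for the optical hardware and are not part of the described system:

```python
# Illustrative simulation of the per-frame handshake: the controller
# steps a positioning signal across the pixel grid; a device whose
# receiver sits over the signaled pixel answers with its unique ID.

def locate_devices(grid_w: int, grid_h: int, devices: dict) -> dict:
    """devices maps device_id -> (x, y) receiver position (simulated).
    Returns device_id -> (x, y) as discovered by the sequential scan."""
    found = {}
    for y in range(grid_h):          # e.g., row-by-row from the top
        for x in range(grid_w):
            # "project" the positioning signal to pixel (x, y), then
            # collect any unique ID feedback sent in response
            for dev_id, pos in devices.items():
                if pos == (x, y):
                    found[dev_id] = (x, y)   # handshake established
    return found
```

For example, `locate_devices(8, 8, {"D1": (2, 5), "D2": (7, 0)})` reports both devices at their simulated positions, after which the controller could address each one through the pixels it occupies for the remainder of the frame.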
The methodology for establishing the “handshake” for each of the input devices will now be described in more detail in the context of a system using two input devices D1 and D2. For each image frame generated by the DLP 16, the controller 18 causes the DLP 16 to sequentially project a unique positioning signal to each pixel (or group of pixels) on the display surface 14, i.e., one after another. The positioning signal can be sequentially transmitted to the pixels on the display surface 14 in any pattern—for example, the positioning signal could be transmitted to the pixels (or groups of pixels) row-by-row, starting at the top row of the image frame. The positioning signal projected to most of the pixels (or groups of pixels) will not be received by either of the input devices. However, when the positioning signal is projected to the pixel (or group of pixels) over which the receiver of the first input device rests, the receiver of the first input device will receive the positioning signal, and the transmitter of the input device will transmit a unique ID signal back to the controller 18, effectively identifying the input device to the controller 18. In this way, the controller will know where the first input device is located on the display surface 14. Similarly, the controller will continue to cause the DLP 16 to project the subliminal positioning signal to the remaining pixels (or groups of pixels) of the image frame. As with the first input device, the second input device will transmit its own unique ID signal back to the controller 18 when it receives the positioning signal from the DLP 16. At that point, the controller 18 knows precisely where each of the input devices D1, D2 is located on the display screen. Therefore, for the remaining portion of the frame, the controller 18 can optically send information to each of the input devices by sending optical signals through the pixel over which the receiver of the particular input device is located.
Similarly, for the remaining portion of the frame, each input device can send signals to the controller (via RF, IR, hardwire, optical, etc.), and the controller will be able to associate the signals that it receives with the particular input device that transmitted it and the physical location of the input device on the display surface 14.
Several variations can be implemented with this methodology for establishing a “handshake” between the input devices D1, DN and the controller 18. For instance, once the input devices are initially located on the display surface 14, the controller 18 may not need to transmit the positioning signal to all of the pixels (or groups of pixels) on the display surface in subsequent image frames. Because the input devices will normally move between adjacent portions of the display surface 14, the controller 18 may cause the subliminal positioning signals to be transmitted only to those pixels that surround the last known positions of the input devices on the display surface 14. Alternatively, multiple different subliminal positioning signals can be projected to the display surface, each coded uniquely relative to each other. Multiple positioning signals would allow faster location of the input devices on the display surface.
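The first variation above, restricting the positioning scan to the pixels surrounding a device's last known position, can be illustrated with the following hypothetical Python sketch (the radius, grid bounds, and function name are assumptions for illustration only):

```python
# Illustrative sketch: the set of pixels the controller would signal
# in the next frame, given a device's last known position. Because an
# input device normally moves only between adjacent portions of the
# surface between frames, scanning this neighborhood suffices.

def neighborhood(last_pos: tuple, radius: int,
                 grid_w: int, grid_h: int) -> list:
    """Return the pixels within `radius` of last_pos, clipped to the
    display bounds, in row-by-row scan order."""
    x0, y0 = last_pos
    cells = []
    for y in range(max(0, y0 - radius), min(grid_h, y0 + radius + 1)):
        for x in range(max(0, x0 - radius), min(grid_w, x0 + radius + 1)):
            cells.append((x, y))
    return cells
```

For a device last seen in the interior of an 8x8 grid, a radius of 1 yields only 9 candidate pixels instead of 64, which is why this variation locates devices faster than a full-surface scan.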
Another method may include sending the positioning signal(s) to large portions of the display surface at the same time and sequentially narrowing the area of the screen where the input device(s) may be located. For example, the controller 18 could logically divide the display surface in half and sequentially send a positioning signal to each of the screen halves. If the controller does not receive any “handshake” signals back from an input device in response to the positioning signal being projected to one of the screen halves, the controller “knows” that there are no input devices positioned on that half of the display surface. Using this method, the display surface 14 can logically be divided up into any number of sections, and, using the process of elimination, the input devices can be located more quickly than by simply scanning across each row of the entire display surface. This method would allow each of the input devices to be located more quickly in each image frame.
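This process of elimination amounts to a binary search over the display area. A hypothetical Python simulation follows; the region representation and the simulated device position stand in for the real optical handshake and are not part of the described system:

```python
# Illustrative simulation of locating one device by halving regions:
# the controller signals each half, descends into the half that returns
# a handshake, and repeats until a single pixel remains.

def subdivide_locate(region: tuple, device_pos: tuple) -> tuple:
    """region = (x, y, w, h). Returns the pixel holding the device."""
    x, y, w, h = region
    if w == 1 and h == 1:
        return (x, y)
    # split along the longer axis into two halves
    if w >= h:
        halves = [(x, y, w // 2, h), (x + w // 2, y, w - w // 2, h)]
    else:
        halves = [(x, y, w, h // 2), (x, y + h // 2, w, h - h // 2)]
    for hx, hy, hw, hh in halves:
        # a handshake reply means the device lies somewhere in this half
        if hx <= device_pos[0] < hx + hw and hy <= device_pos[1] < hy + hh:
            return subdivide_locate((hx, hy, hw, hh), device_pos)
```

On an 8x8 grid this takes 6 halvings rather than up to 64 per-pixel probes, which is the speed advantage the passage describes.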
In another embodiment, once each of the input devices is affirmatively located on the display surface 14, the controller 18 could cause the DLP 16 to stop projecting image content to the pixels on the display surface under the input devices. Because the input devices would be covering these pixels anyway (and thus they would be non-viewable by a human user), there would be no need to project image content to those pixels. With no image content, all of the pixels under each of the input devices could be used continuously to transmit data to the input device, allowing the controller to transmit higher amounts of data in the same time frame.
The ability to allow multiple input devices to simultaneously communicate data to the system has a variety of applications. For example, the interactive display system can be used for interactive video/computer gaming, where multiple game pieces (input devices) can communicate with the system simultaneously. In one gaming embodiment, the display surface 14 may be set up as a chess board with thirty-two input devices, each input device being one of the chess pieces. The described interactive display system allows each of the chess pieces to communicate with the system simultaneously, allowing the system to track the moves of the pieces on the board. In another embodiment, the display surface can be used as a collaborative work surface, where multiple human users “write” on the display surface using multiple input devices (such as pens) at the same time.
In another embodiment, the interactive display system can be used such that multiple users can access the resources of a single controller (such as a personal computer, including its storage disk drives and its connection to the Internet, for example) through a single display surface to perform separate tasks. For example, an interactive display system could be configured to allow each of several users to access different Web sites, PC applications, or other tasks on a single personal computer through a single display surface. For instance, the “table” of
In some embodiments, it may be useful for the various input devices positioned on the display surface to communicate with each other. This can be accomplished by communicating from one input device to another through the display surface. Specifically, once the various input devices are located on the display surface, a first input device can transmit data information to the controller 18 via its transmitter (such as, via infrared, radio frequency, hard wires, etc.), and the controller 18, in turn, can relay that information to a second input device optically, as described hereinabove. The second input device can respond to the first input device through the controller 18 in similar fashion.
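The relay pattern described above can be sketched as follows. This is a hypothetical Python interface, purely for illustration; the class and method names are assumptions, and the optical and RF/IR channels are simulated by an in-memory inbox:

```python
# Illustrative sketch of device-to-device communication relayed through
# the controller: device A sends over its back channel (RF, IR, wire,
# etc.), and the controller forwards the payload optically via the
# pixels under device B's receiver (simulated here as an inbox).

class Controller:
    def __init__(self):
        self.inbox = {}          # device_id -> list of (sender, payload)

    def register(self, device_id):
        """Called once the device's handshake has located it."""
        self.inbox[device_id] = []

    def relay(self, src_id, dst_id, payload):
        """Forward a payload from one located device to another."""
        if dst_id not in self.inbox:
            raise KeyError("unknown device: " + dst_id)
        # in hardware this would be a subliminal optical transmission
        # through the pixels under the destination device's receiver
        self.inbox[dst_id].append((src_id, payload))
```

A first device "D1" could then send to a second device "D2" with `controller.relay("D1", "D2", payload)`, and "D2" could respond through the controller in the same fashion.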
While the present invention has been particularly shown and described with reference to the foregoing preferred and alternative embodiments, it should be understood by those skilled in the art that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention without departing from the spirit and scope of the invention as defined in the following claims. It is intended that the following claims define the scope of the invention and that the method and apparatus within the scope of these claims and their equivalents be covered thereby. This description of the invention should be understood to include all novel and non-obvious combinations of elements described herein, and claims may be presented in this or a later application to any novel and non-obvious combination of these elements. The foregoing embodiments are illustrative, and no single feature or element is essential to all possible combinations that may be claimed in this or a later application. Where the claims recite “a” or “a first” element or the equivalent thereof, such claims should be understood to include incorporation of one or more such elements, neither requiring nor excluding two or more such elements.