The present invention relates to input devices, and more particularly to an input device using two or more stimulus sensors.
Conventional content creation computing platforms, such as portable computing devices (e.g., laptop computers, tablet computers, and the like), are typically configured to receive input stimulus via a touch screen. The input device (e.g., touch screen) is usually provided by a resistive touch screen or a capacitive touch screen. The input stimulus may be applied by a pen, stylus, or human digit to enable a user to draw or write on the touch screen. In practice, a user's hand may also touch the screen along with the pen, stylus, or user's digit. As a result, the touch screen may capture touch from the user's hand as well as from the pen, stylus, or user's digit. The touch screen may appear to the user to behave erratically when the actual response at the touch screen does not match the expected response.
The problem of capturing unintended touches by a user's hand can be partially solved using a software-based solution. For example, software may be configured to detect an area contacted by the user's palm on a touch screen that is larger than a typical touch performed by a pen, stylus, or user's digit. The software may filter the touches received at the touch screen and block any touches that contact a large area of the touch screen. However, unintended touches that contact a small area of the touch screen may not be blocked, which is a shortcoming of the software-based solution. Thus, there is a need for addressing these issues and/or other issues associated with the prior art.
A system, method, and computer program product are provided for sensing input stimulus at an input device. The method includes the steps of configuring an input device comprising a first sensor layer and a second sensor layer to activate the first sensor layer and to deactivate the second sensor layer, where the second sensor layer is layered above the first sensor layer and associated with a stimulus device. When a request to activate the second sensor layer is received, the input device is configured to activate the second sensor layer to respond to stimulus received by the stimulus device and to deactivate the first sensor layer. A third sensor layer may be included in the input device and the third sensor layer may be associated with a different stimulus device.
In conventional touch screen products, only a single touch sensor layer (i.e., grid) is provided at the surface of a display. However, different input methods, such as finger sensing and stylus sensing, place different requirements on the sensor. To address this problem, multiple sensor layers are provided on the same screen, where each sensor layer responds to a different kind of stimulus device, as described below.
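For illustration only, the layered arrangement might be modeled as in the following sketch; the names (SensorLayer, StimulusKind, and the layer names) are hypothetical and are not part of any claimed implementation:

```python
# Minimal sketch of a multi-layer sensor stack; all names are illustrative
# assumptions, not part of the disclosure.
from dataclasses import dataclass
from enum import Enum, auto

class StimulusKind(Enum):
    FINGER = auto()   # e.g., a capacitive touch-sensitive layer
    STYLUS = auto()   # e.g., a stylus-only touch-sensitive layer
    LIGHT = auto()    # e.g., a light-sensitive layer for a laser pointer

@dataclass
class SensorLayer:
    name: str
    kind: StimulusKind
    active: bool = False

# Multiple sensor layers stacked over one display, each responding to a
# different kind of stimulus device; only one is active at a time.
stack = [
    SensorLayer("finger_grid", StimulusKind.FINGER, active=True),
    SensorLayer("stylus_grid", StimulusKind.STYLUS),
    SensorLayer("light_grid", StimulusKind.LIGHT),
]
```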
At step 110, a request to activate the second sensor layer is received. The request to activate the second sensor layer may be received in response to activation of a stimulus device associated with the second sensor layer. In the context of the present description, a stimulus device associated with a touch-sensitive sensor layer may include a pen, stylus, or a human digit. In the context of the present description, a stimulus device associated with a light-sensitive sensor layer may include a laser pointer configured to generate a light beam at a particular wavelength. In one embodiment, the second sensor layer or the stimulus device may be activated from a user interface of an application program (e.g., a content creation program, presentation program, video game, or the like). In one embodiment, the stimulus device or the input device comprises a switch mechanism (e.g., button) that may be used to activate the stimulus device or the second sensor layer. In one embodiment, the stimulus device is activated when motion is detected (i.e., when a user picks up or otherwise repositions the stimulus device).
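As a non-limiting sketch, an activation request such as the one received at step 110 might be represented as follows; the dictionary format and source names are assumptions made for illustration:

```python
# Hedged sketch of an activation request; the event format is assumed.
def make_activation_request(layer_name: str, source: str) -> dict:
    """Sources: "ui" (application user interface), "switch" (button on the
    stimulus device or input device), or "motion" (stimulus device moved)."""
    if source not in ("ui", "switch", "motion"):
        raise ValueError("unknown activation source")
    return {"type": "activate", "layer": layer_name, "source": source}
```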
At step 115, the input device is configured to activate the second sensor layer to respond to stimulus received by the stimulus device and to deactivate the first sensor layer. When the first sensor layer is deactivated, stimulus received by the first sensor layer is discarded (or ignored). In general, stimulus received by an activated sensor layer is processed or responded to by the input device and stimulus received by a deactivated sensor layer is discarded or ignored by the input device.
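A minimal sketch of steps 110 and 115, reusing the SensorLayer model above, might look like the following; the function names are illustrative assumptions:

```python
def activate_layer(stack: list, target: SensorLayer) -> None:
    # Activating the requested layer deactivates every other layer (step 115).
    for layer in stack:
        layer.active = (layer is target)

def process(event) -> None:
    # Placeholder for responding to stimulus (e.g., updating the display).
    print("processing", event)

def handle_stimulus(layer: SensorLayer, event) -> None:
    if not layer.active:
        return          # deactivated layer: stimulus is discarded/ignored
    process(event)      # activated layer: stimulus is processed
```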
In an example embodiment, a first sensor layer may be a touch-sensitive sensor layer configured to respond to stimulus provided by human digit input devices and a second sensor layer may be a touch-sensitive sensor layer configured to respond to stimulus provided by a stylus stimulus input device. The input device may be configured to discard stimulus applied by a human digit at the first sensor layer and to respond to stimulus applied by a stylus to the second sensor layer. Alternatively, the input device may be configured to discard stimulus applied by the stylus to the second sensor layer and respond to stimulus applied by a human digit at the first sensor layer.
More illustrative information will now be set forth regarding various optional architectures and features with which the foregoing framework may or may not be implemented, per the desires of the user. It should be strongly noted that the following information is set forth for illustrative purposes and should not be construed as limiting in any manner. Any of the following features may be optionally incorporated with or without the exclusion of other features described.
The stimulus device 220 may be a stylus stimulus input device that is associated with the sensor layer 210 and configured to provide input stimulus that is received by the sensor layer 210. The stimulus device 225 may be a user digit stimulus input device that is associated with the sensor layer 205 and configured to provide input stimulus that is received by the sensor layer 205. To avoid detection of unintended stimulus input, the system 200 includes control circuitry (not shown) that is configured to discard or ignore input stimulus received by one of the two sensor layers 205 and 210 at any particular time. Therefore, only one of the input stimulus devices 220 and 225 should be used at any particular time.
In one embodiment, the control circuitry is configured to discard stimulus input from the sensor layer 205 or 210 that is disabled. In another embodiment, the sensor layer 205 or 210 is not disabled, but signals received by the sensor layer do not result in any action taken based on the stimulus input or contribute to an image displayed at the display layer 215. In one embodiment, the sensor layer 205 may be a conventional capacitive touch-sensitive layer that is associated with the stimulus device 225 and the sensor layer 210 may be a non-capacitive sensor layer that is associated with the stimulus device 220. When the stimulus device 220 is activated, input stimulus received by the sensor layer 205 is discarded.
The sensor layer 210 may be activated when the stimulus device 220 is removed from a holder (e.g., slot) included in the system 200. In one embodiment, the sensor layer 210 is activated when a particular software application program (e.g., a content creation program) is launched by a user. As previously explained, the sensor layer 210 and sensor layer 205 may be activated through a user interface (e.g., pull-down menu, selection of an icon, or the like) or through a switch mechanism located on the input device (e.g., touchpad or touchscreen). In one embodiment, the stimulus device 220 comprises a switch mechanism that may be used to activate and deactivate the sensor layer 210.
For example, in one embodiment, the sensor layer 205 is a capacitive touch sensor and the sensor layer 210 is a touch-sensitive layer configured to only recognize stimulus input received from a stylus, such as the stimulus device 220. When a user is operating a content creation application program, the user holds the stimulus device 220 and enables a switch on the stimulus device 220, activating both the stimulus device 220 and the sensor layer 210 and selecting a drawing tool. Stimulus input received by the sensor layer 205 is discarded by the system 200. Whenever the stimulus device 220 touches the sensor layer 210, stimulus input is processed and displayed at the display layer 215. When the user deactivates the sensor layer 210, for example by turning the switch off, the sensor layer 205 may become active and a tool, such as an eraser, may be selected by default and operated by the user when the stimulus device 225 touches the sensor layer 205. While both stimulus devices 220 and 225 may be used to provide stimulus input to the input device, only one of the sensor layers is activated at a time, so that unintended stimulus input received by the inactive sensor layer is discarded or ignored.
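The switch-driven flow described above could be sketched as follows, reusing activate_layer from the earlier sketch; the tool names "draw" and "eraser" are illustrative assumptions:

```python
def on_stylus_switch(stack, stylus_layer, finger_layer, switch_on: bool) -> str:
    """Return the tool selected for the newly activated sensor layer."""
    if switch_on:
        activate_layer(stack, stylus_layer)   # stylus input is now honored
        return "draw"                         # drawing tool for the stylus
    activate_layer(stack, finger_layer)       # fall back to the finger layer
    return "eraser"                           # default tool for a user digit
```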
The stimulus device 235 may be a light generating pointer-type device that is associated with the sensor layer 230 and configured to provide input stimulus that is received by the sensor layer 230. In contrast with the stimulus devices 220 and 225, the stimulus device 235 need not touch or otherwise contact the associated sensor layer 230 to provide stimulus input. In practice, the stimulus device 235 may be located several meters away from the sensor layer 230, such as a laser pointer device that is used to project a point of light onto a screen.
To avoid detection of unintended stimulus input, the system 250 includes control circuitry (not shown) that is configured to discard input stimulus received by two of the three sensor layers 205, 210, and 230 at any particular time. Therefore, only one of the input stimulus devices 220, 225, and 235 should be used at any particular time.
In one embodiment, the control circuitry is configured to discard stimulus input received by two of the three sensor layers 205, 210, and 230. In another embodiment, two of the three sensor layers 205, 210, and 230 are not disabled, but signals received by those two sensor layers do not result in any action taken based on the stimulus input or contribute to an image displayed at the display layer 215. In one embodiment, a priority between the three sensor layers 205, 210, and 230 is defined, and either the signals received by the two lower-priority sensor layers are discarded or the two lower-priority sensor layers are disabled. A priority level for each of the sensor layers 205, 210, and 230 may be specified by an application program, set through a user interface, or predefined for the sensor layer. In one embodiment, the sensor layer 205 may be a conventional capacitive touch-sensitive layer that is associated with the stimulus device 225, the sensor layer 210 may be a non-capacitive sensor layer that is associated with the stimulus device 220, and the sensor layer 230 may be a solar cell light-sensitive sensor layer that is associated with the stimulus device 235. When the sensor layer 210 is activated, input stimuli received by the sensor layers 205 and 230 are ignored or discarded. When the sensor layer 205 is activated, input stimuli received by the sensor layers 210 and 230 are ignored or discarded. When the sensor layer 230 is activated, input stimuli received by the sensor layers 205 and 210 are ignored or discarded.
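The priority scheme may be sketched as follows; the priority values and function name are assumptions for illustration only:

```python
def arbitrate(stack, priority: dict):
    """Return the highest-priority activated layer; stimulus received by
    every other layer is discarded (or its sensor layer is disabled)."""
    candidates = [layer for layer in stack if layer.active]
    if not candidates:
        return None
    return max(candidates, key=lambda layer: priority[layer.name])

# Example priority levels, e.g., as specified by an application program.
priority = {"finger_grid": 1, "stylus_grid": 2, "light_grid": 3}
```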
The sensor layer 230 may be activated when the stimulus device 235 is removed from a holder (e.g., slot) included in the system 250. In one embodiment, the sensor layer 230 is activated when a particular software application program (e.g., a content creation program or presentation application program) is launched by a user. As previously explained, one or more of the sensor layers 210, 205, and/or 230 may be activated through a user interface (e.g., pull-down menu, selection of an icon, or the like) or through a switch mechanism located on the input device (e.g., touchpad or touchscreen). In one embodiment, the stimulus device 235 comprises a switch mechanism that may be used to activate and deactivate the sensor layer 230.
The display processing unit 320 is configured to provide a display output 330 that may include image data to a display device. The display processing unit 320 is coupled to the multi-sensor layer input device 325 that is configured to provide stimulus inputs 355-A and 355-B to the display processing unit 320. When the stimulus device 225 is activated, the display processing unit 320 may combine the stimulus input 355-B that is received by the sensor layer 205 with other image data to produce an image for display on the display device while ignoring the stimulus input 355-A. When the sensor layer 210 is activated, the display processing unit 320 may combine the stimulus input 355-A that is received by the sensor layer 210 with other image data to produce an image for display on the display device while ignoring the stimulus input 355-B. The display processing unit 320 may disable one of the sensor layers 205 and 210 in the multi-sensor layer input device 325 based on commands received from the application program 310 via the device driver 315. The display processing unit 320 may also disable one of the sensor layers 205 and 210 based on an indication, received via the respective sensor layer 205 or 210, that one of the sensor layers 205 and 210 is activated.
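As a hedged illustration of this display path, the composition step might be sketched as follows; the frame representation is an assumption:

```python
def compose_frame(base_pixels: dict, stimulus_by_layer: dict, honored: str) -> dict:
    """Combine stimulus from the honored sensor layer with other image data;
    stimulus received by any other layer never reaches the frame."""
    frame = dict(base_pixels)                       # {(x, y): color}
    for (x, y) in stimulus_by_layer.get(honored, []):
        frame[(x, y)] = "ink"                       # draw honored stimulus
    return frame
```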
In one embodiment, the display processing unit 320 is a graphics processing unit (GPU) included in a computer system such as a desktop computer system, a laptop computer system, a tablet device, and the like. The GPU may be configured to render graphics data, such as data that represents a 3D model of a scene, to generate images for display on the display layer 215 or a display device. The GPU may also be coupled to a host processor such as a central processing unit (CPU). The CPU may execute a device driver for the GPU that enables the application program 310 to provide a graphical user interface for a user to activate and deactivate the sensor layers 205 and 210.
In an alternative embodiment, the input device may be coupled to a system bus and stimulus input from the input device may be processed by an operating system on a CPU. The stimulus input may then be processed by the application program 310 and/or device driver 315 to modify commands sent to the display processing unit 320 and thereby affect an image generated for display on the display device. The architectures set forth herein are for example only and any architecture including the input device is within the scope of the present disclosure.
The input layer control unit 345 receives input device data from two or more layer input units 340. In one embodiment, two or more layer input units 340 are included in the display processing unit 320, where each layer input unit 340 corresponds to a separate sensor layer. For example, the layer input unit 340-A may correspond to one of the sensor layers 205, 210, and 230 and the stimulus input 355-A may be received from one of the stimulus devices 225, 220, and 235, respectively. Similarly, the layer input unit 340-B may correspond to one of the sensor layers 205, 210, and 230 and the stimulus input 355-B may be received from one of the stimulus devices 225, 220, and 235, respectively. The optional layer input unit 340-C may correspond to one of the sensor layers 205, 210, and 230 and the stimulus input 355-C may be received from one of the stimulus devices 225, 220, and 235, respectively.
In one embodiment, the input layer control unit 345 is configured to disable layer input units 340 corresponding to sensor layers that are not activated so that the disabled layer input units 340 do not provide input device data to the input layer control unit 345. The input layer control unit 345 may be configured to disable one or more of the layer input units 340 based on priority levels associated with the sensor layers. In another embodiment, the input layer control unit 345 receives input device data from one or more of the layer input units 340 and discards the input device data from layer input units 340 that correspond to sensor layers that are not activated. The input layer control unit 345 may be configured to discard the input device data from one or more of the layer input units 340 based on priority levels associated with the sensor layers.
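Both policies described above, disabling the layer input units 340 for inactive sensor layers versus accepting and then discarding their data, might be sketched as follows; the class and method names are assumptions:

```python
class LayerInputUnit:
    """One unit per sensor layer (e.g., 340-A, 340-B, 340-C)."""
    def __init__(self, layer_name: str):
        self.layer_name = layer_name
        self.enabled = True
        self.pending = []          # stimulus events awaiting collection

    def poll(self) -> list:
        if not self.enabled:
            return []              # disabled unit supplies no input data
        events, self.pending = self.pending, []
        return events

class InputLayerControlUnit:
    def __init__(self, units, active_layer: str, discard_mode: bool = False):
        self.units = units
        self.active_layer = active_layer
        if not discard_mode:       # policy (a): disable inactive units
            for unit in units:
                unit.enabled = (unit.layer_name == active_layer)

    def collect(self) -> list:
        honored = []
        for unit in self.units:
            for event in unit.poll():
                if unit.layer_name == self.active_layer:
                    honored.append(event)
                # policy (b): events from inactive layers are discarded here
        return honored
```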
At step 405, the input layer control unit 345 configures an input device comprising a first sensor layer, a second sensor layer, and a third sensor layer to respond to stimulus received by the first sensor layer and to discard stimulus received by the second sensor layer and the third sensor layer. At step 410, the input layer control unit 345 determines if an activation event has occurred. In one embodiment, an activation event occurs when an inactive (i.e., deactivated) sensor layer is activated. In one embodiment, an activation event occurs when a stimulus device associated with an inactive sensor layer is activated. A sensor layer and/or stimulus device may be activated from a user interface of an application program or by a switch mechanism. A stimulus device may be activated when movement of the stimulus device is detected.
If, at step 410, the input layer control unit 345 determines that an activation event has not occurred, then at step 415, the input layer control unit 345 provides stimulus received by the active sensor layer to the image processing unit 350. The image processing unit 350 is configured to produce an image for display at a display layer and output the image via the display output 330.
If, at step 410, the input layer control unit 345 determines that an activation event has occurred, then, at step 420, the input layer control unit 345 configures the input device to activate the second sensor layer and to deactivate the first sensor layer and the third sensor layer. When the second sensor layer is activated, the input device responds to stimulus received by the second sensor layer. When the first sensor layer and the third sensor layer are deactivated, the input device discards stimulus received by the first sensor layer and the third sensor layer, respectively. At step 425, the input layer control unit 345 determines if a termination event has occurred. A termination event may occur when the input device is disabled (e.g., powered down or shut down) or when an activated stimulus device has been idle for a period of time. If, at step 425, the input layer control unit 345 determines that a termination event has occurred, then the method 400 terminates. Otherwise, the input layer control unit 345 returns to step 410.
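The overall loop of method 400 might be sketched as follows, reusing activate_layer from above and the activation request format sketched earlier; the event-polling and stimulus-forwarding callbacks are assumptions:

```python
def run_method_400(stack, get_event, forward_stimulus) -> None:
    activate_layer(stack, stack[0])           # step 405: first layer active
    while True:
        event = get_event()                   # step 410: activation event?
        if event and event.get("type") == "activate":
            target = next(l for l in stack if l.name == event["layer"])
            activate_layer(stack, target)     # step 420: switch active layers
        elif event and event.get("type") == "terminate":
            break                             # step 425: termination event
        else:
            for layer in stack:               # step 415: forward stimulus
                if layer.active:              # only the active layer's input
                    forward_stimulus(layer)   # reaches the image processing
```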
The system 500 also includes input devices 512, a graphics processor 506, and a display 508, e.g., a conventional CRT (cathode ray tube), LCD (liquid crystal display), LED (light emitting diode), plasma display, or the like. User input may be received from the input devices 512, e.g., keyboard, mouse, touchpad, microphone, and the like. In one embodiment, the display 508 may be a touchscreen or other display device including one or more sensor layers, and stimulus input devices may be associated with one or more of the sensor layers.
In one embodiment, the graphics processor 506 may include a plurality of shader modules, a rasterization module, etc. Each of the foregoing modules may even be situated on a single semiconductor platform to form a graphics processing unit (GPU). When the input devices 512 comprise a touchpad or other input device including one or more sensor layers, stimulus input devices may be associated with one or more of the sensor layers. The graphics processor 506 or the central processor 501 may be configured to receive stimulus input received by the one or more sensor layers and process the stimulus input to produce an image for display by the display 508.
In the present description, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit or chip. It should be noted that the term single semiconductor platform may also refer to multi-chip modules with increased connectivity which simulate on-chip operation, and make substantial improvements over utilizing a conventional central processing unit (CPU) and bus implementation. Of course, the various modules may also be situated separately or in various combinations of semiconductor platforms per the desires of the user.
The system 500 may also include a secondary storage 510. The secondary storage 510 includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, digital versatile disk (DVD) drive, recording device, universal serial bus (USB) flash memory. The removable storage drive reads from and/or writes to a removable storage unit in a well-known manner.
Computer programs, or computer control logic algorithms, may be stored in the main memory 504 and/or the secondary storage 510. Such computer programs, when executed, enable the system 500 to perform various functions. The memory 504, the storage 510, and/or any other storage are possible examples of computer-readable media.
In one embodiment, the architecture and/or functionality of the various previous figures may be implemented in the context of the central processor 501, the graphics processor 506, an integrated circuit (not shown) that is capable of at least a portion of the capabilities of both the central processor 501 and the graphics processor 506, a chipset (i.e., a group of integrated circuits designed to work and sold as a unit for performing related functions, etc.), and/or any other integrated circuit for that matter.
Still yet, the architecture and/or functionality of the various previous figures may be implemented in the context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and/or any other desired system. For example, the system 500 may take the form of a desktop computer, laptop computer, server, workstation, game console, embedded system, and/or any other type of logic. Still yet, the system 500 may take the form of various other devices including, but not limited to, a personal digital assistant (PDA) device, a mobile phone device, a television, etc.
Further, while not shown, the system 500 may be coupled to a network (e.g., a telecommunications network, local area network (LAN), wireless network, wide area network (WAN) such as the Internet, peer-to-peer network, cable network, or the like) for communication purposes.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.